Are there any test scenarios/processes to validate OpenEdge Replication in case of disaster?
Any documents/test scenarios that prove the viability of the replication process would be helpful.
First and foremost you need to know that your replication is up and running.
Replication does not replace backup
(If a user deletes all records or drops tables from the database - that change will be replicated!)
Once a disaster has happened and you need to switch from your source to the target, you should do some validation of the database before turning it into a valid master. OpenEdge will most likely complain if there are any big errors in the database, incomplete transactions, etc. But only you can check whether your database contains what it should. All crashes tend to lose something - at the very least the not-yet-committed transactions.
Again: Replication does not replace backup
Validating Replication
You can validate the status of the running replication in different ways:
Virtual System Tables
You can access lots of useful data in the VSTs. See the product documentation for further details.
_Database-Feature:
Displays the list of features that are active and/or enabled within the database.
_Repl-Server:
Provides detailed OpenEdge Replication server information.
_Repl-AgentControl:
Provides detailed information about the OpenEdge Replication agents this OpenEdge Replication server is controlling.
_Repl-Agent:
Provides detailed OpenEdge Replication agent information.
Example code:
/* Is replication enabled and active on this database? */
FIND FIRST _Database-Feature NO-LOCK
     WHERE _Database-Feature._DBFeature_Name = "OpenEdge Replication" NO-ERROR.
IF AVAILABLE _Database-Feature THEN DO:
    DISPLAY
        (_Database-Feature._DBFeature_Enabled = "1") LABEL "Repl enabled"
        (_Database-Feature._DBFeature_Active  = "1") LABEL "Repl running"
        WITH FRAME frame1 SIDE-LABELS 1 COLUMN TITLE "Replication".
END.

FIND FIRST _Repl-Server NO-LOCK NO-ERROR.
IF AVAILABLE _Repl-Server THEN DO:
    DISPLAY
        _Repl-Server._ReplSrv-AgentCount      LABEL "# Agents"
        _Repl-Server._ReplSrv-BlocksSent      LABEL "Blocks sent"
        _Repl-Server._ReplSrv-StartTime       LABEL "Started at"
        _Repl-Server._ReplSrv-LastBlockSentAt LABEL "Last block sent"
        WITH FRAME frame2 SIDE-LABELS 1 COLUMN TITLE "Repl Server".
END.

/* To access _Repl-AgentControl you need to connect a source/master db and not a target/slave db */
FIND FIRST _Repl-AgentControl NO-LOCK NO-ERROR.
IF AVAILABLE _Repl-AgentControl THEN DO:
    DISPLAY
        _Repl-AgentControl._ReplAgtCtl-ConnectTime     LABEL "Connected at"
        _Repl-AgentControl._ReplAgtCtl-RemoteDBName    LABEL "Remote DB"   FORMAT "x(20)"
        _Repl-AgentControl._ReplAgtCtl-RemoteHost      LABEL "Remote Host" FORMAT "x(20)"
        _Repl-AgentControl._ReplAgtCtl-LastBlockSentAt LABEL "Last block sent"
        _Repl-AgentControl._ReplAgtCtl-Method          LABEL "Method"
        (_Repl-AgentControl._ReplAgtCtl-Status = 3049)  LABEL "Normal Status"
        (_Repl-AgentControl._ReplAgtCtl-CommStatus = 1) LABEL "Connected"
        WITH FRAME frame3 SIDE-LABELS 1 COLUMN TITLE "Repl Agent Control" WIDTH 80.
END.
ELSE DO:
    DISPLAY "Not a source".
END.

/* To access _Repl-Agent you need to connect a target/slave db and not a source ... */
FIND FIRST _Repl-Agent NO-LOCK NO-ERROR.
IF AVAILABLE _Repl-Agent THEN DO:
    DISPLAY
        (_Repl-Agent._ReplAgt-Status = 3049)  LABEL "Normal Status"
        (_Repl-Agent._ReplAgt-CommStatus = 1) LABEL "Connected"
        WITH FRAME frame4 SIDE-LABELS 1 COLUMN TITLE "Repl Agent".
END.
ELSE DO:
    DISPLAY "Not a slave db.".
END.
Command Line
You can use the command-line tool dsrutil to access information about the replication.
Example:
This will give you an interactive prompt for checking various things:
dsrutil db -C monitor
You can also use other options (see manuals) for scripting.
Example:
dsrutil db -C status detail
This simply writes 6021 (and returns OK to the OS) if everything is OK. Check the OE Replication Documentation below for more information.
Sources:
OE 11.4 Replication Documentation
OE 11.4 Database Management - Chapter 28
Related
I'm new to SQLite.
Say a row in your table has a column status that persists a state machine. A typical operation on such a row would be to read the current state and set the next state depending on it, with the code having some side effects.
In a transaction:
val state = db.execute("SELECT state FROM ...")
if (state == "READY_TO_SEND") {
    // ... do some side-effecting stuff, like sending an email ...
    db.execute("UPDATE ... SET state = 'MAIL_SENT'")
}
In Postgres I could pessimistically lock the row using "SELECT ... FOR UPDATE" to synchronize in this case.
In the case of SQLite with WAL mode enabled, if I understood correctly, the read of the current state could happen concurrently in two transactions. Both threads would create the side effects. Then both transactions would want to upgrade to write. One of them will succeed at the upgrade, the other not, but it's too late to synchronize the side effects.
Did I understand the SQLite isolation mechanism correctly?
How would you go about synchronization in this case?
Thanks!
I believe that in your example there would be no issue.
If both reads were concurrent and read the same row(s), both would want to update and set the state from "READY_TO_SEND" to "MAIL_SENT". Even if the updates ran concurrently, they would be serialized, as only one writer can run at a time. Thus both updates would be applied, but sequentially (even though only one would be required), and the end result would be the same.
This is because there is only one WAL file, so there can only be one writer at a time.
SQLite - Write-Ahead-Logging
Perhaps a bigger issue/side effect could arise if the update were UPDATE .... SET state = 'MAIL_SENT' WHERE state = 'READY_TO_SEND'; and the result of the update (the number of rows updated) were then used to indicate whether the email was sent. The second could be 0, as the WHERE clause would then suppress the update. Thus the result from the update might lead to an indication that the email was not sent when it has been sent.
However, this would be a design issue as obviously they are separate emails.
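One way to avoid the race entirely (not covered above, but standard SQLite; a minimal sketch, where the outbox table and id value are assumptions) is to open the transaction with BEGIN IMMEDIATE, so the write lock is taken before the state is read and no second transaction can interleave its read:

-- Take the write lock up front: a second BEGIN IMMEDIATE cannot acquire it
-- until COMMIT (it returns SQLITE_BUSY or waits, depending on busy_timeout),
-- so the read-check-send-update sequence runs one transaction at a time.
BEGIN IMMEDIATE;
SELECT state FROM outbox WHERE id = 42;        -- hypothetical table and key
-- ... application sends the email here ...
UPDATE outbox SET state = 'MAIL_SENT'
 WHERE id = 42 AND state = 'READY_TO_SEND';    -- guarded update as a second safety net
COMMIT;

The guarded WHERE clause mirrors the pattern described above, so the update stays a no-op even if the locking is ever bypassed.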
Warning, I am a complete noob with SQLite and Node-Red.
I am working on a project to scan and read car license plates. I now have the hardware up and running, it is passing plate information to a very basic SQLite 3 table of two records through Node-Red on a Raspberry Pi 3.
I can run instant queries, where a module sends over an exact query to run, e.g.
SELECT "License_Plate" FROM QuickDirtyDB WHERE "License_Plate" LIKE "%RAF66%"
This will come back with my plate RAF660, as below
topic: "SELECT "License_Plate" FROM QuickDirtyDB WHERE "License_Plate" LIKE "%RAF66%""
payload: array[1]
0: object
License_Plate: "RAF660"
When I automate and run this query, it will not work; I have been playing with this for three days now.
I am even unable to get a very basic automated query to work like
var readlpr = msg.payload;
msg.topic = 'SELECT "License_Plate" FROM QuickDirtyDB WHERE "License_Plate" = ' + readlpr + ''
return msg;
that's two single quotes at the end of the query line.
This is sent through to the query as below; it is the output from the debug node, exactly what is going into the query.
"SELECT "License_Plate" FROM QuickDirtyDB WHERE "License_Plate" = RAF660 "
and the error that comes out is,
"Error: SQLITE_ERROR: no such column: RAF660"
After this is working, I need to work out how I can allow a mismatch of two characters in case the OCR software either misreads two characters or even drops two characters entirely. Is this something that a query can handle, or will I have to pass many plate details to a program to work out if I have a match?
I thought I would have had to run a query to create some kind of a view and then requery my read plate vs that view to see which plate in the database is the closest match, not sure if I have the terminology correct, view, join, union etc.
Thank you for looking and any suggestions you may have.
I will probably be going home in about an hour, so may not be able to check back in till Monday
RAF660 is a string and needs to be quoted as a string: 'RAF660' (single quotes, which SQL uses for string literals).
License_Plate is a column and should not be quoted as a string (double quotes denote identifiers).
The way you have it reads as: fetch the rows where the License_Plate column equals a column named RAF660, hence the "no such column: RAF660" error.
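Applied to the function node above, the SQL that reaches the database should come out like this (a sketch; a parameterized query would sidestep both the quoting problem and SQL injection):

-- The value is in single quotes (a string); the column is in double quotes (an identifier).
SELECT "License_Plate" FROM QuickDirtyDB WHERE "License_Plate" = 'RAF660';

In the function node, that means concatenating single quotes around readlpr when building msg.topic.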
I have a file receive location which is scheduled to run at a specific time of day. I need to trigger an alert or email if the receive location is unable to find any file at that location.
I know I can create custom components or I can use BizTalk 360 to do so. But I am looking for some out of box BizTalk feature.
BizTalk is not very good at triggering on non-events. Non-events are things that did not happen, but still represent a certain scenario.
What you could do is:
Insert the filename of any file triggering the receive location in a custom SQL table.
Once per day (scheduled task adapter or polling via stored procedure) you would trigger a query on the SQL table, which would only create a message in case no records were made that day (see the sketch after this list).
Also think about cleanup: that approach will require you to delete any existing records.
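A minimal sketch of such a poll, where the log table and column names are assumptions:

-- Hypothetical log table: one row per file picked up by the receive location.
-- Returns a row only when nothing arrived today, which the polling port can
-- turn into a notification message.
SELECT 'NoFileReceived' AS AlertType
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.tReceivedFiles                      -- hypothetical table
    WHERE ReceivedAt >= CAST(GETDATE() AS DATE)  -- midnight today
);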
Another option could be a scheduled task with a custom C# program which would create a file only if there were no input files, etc...
The sequential convoy solution should work, but I'd be concerned about a few things:
It might consume the good message when the other subscriber is down, which might cause you to miss what you'd normally consider a subscription failure
Long running orchestrations can be difficult to manage and maintain. It sounds like this one would be running all day/night.
I like Pieter's suggestion, but I'd expand on it a bit:
Create a table, something like this:
CREATE TABLE tFileEventNotify
(
ReceiveLocationName VARCHAR(255) NOT NULL primary key,
LastPickupDate DATETIME NOT NULL,
NextExpectedDate DATETIME NOT NULL,
NotificationSent bit null,
CONSTRAINT CK_FileEventNotify_Dates CHECK(NextExpectedDate > LastPickupDate)
);
You could also create a procedure for this, which should be called every time you receive a file on that location (from a custom pipeline or an orchestration), something like
CREATE PROCEDURE usp_Mrg_FileEventNotify
(
    @rlocName VARCHAR(255),
    @LastPickupDate DATETIME,
    @NextPickupDate DATETIME
)
AS
BEGIN
    IF EXISTS (SELECT 1 FROM tFileEventNotify WHERE ReceiveLocationName = @rlocName)
    BEGIN
        UPDATE tFileEventNotify
        SET LastPickupDate = @LastPickupDate, NextPickupDate = @NextPickupDate
        WHERE ReceiveLocationName = @rlocName;
    END
    ELSE
    BEGIN
        INSERT tFileEventNotify (ReceiveLocationName, LastPickupDate, NextPickupDate)
        VALUES (@rlocName, @LastPickupDate, @NextPickupDate);
    END
END
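From the custom pipeline or orchestration, the call might look like this (the location name and interval are illustrative; here the next file is expected within a day):

DECLARE @now DATETIME = GETDATE();
DECLARE @next DATETIME = DATEADD(DAY, 1, @now);
EXEC usp_Mrg_FileEventNotify
     @rlocName = 'MyReceiveLocation',   -- hypothetical receive location name
     @LastPickupDate = @now,
     @NextPickupDate = @next;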
And then you could create a polling port that had the following Polling Data Available statement:
SELECT 1 FROM tFileEventNotify WHERE NextPickupDate < GETDATE() AND (NotificationSent IS NULL OR NotificationSent = 0)
And write up a procedure to produce a message from that table that you could then map to an email sent via SMTP port (or whatever other notification mechanism you want to use). You could even add columns to tFileEventNotify like EmailAddress or SubjectLine etc. You may want to add a field to the table to indicate whether a notification has already been sent or not, depending on how large you make the polling interval. If you want it sent every time you can ignore that part.
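A sketch of that message-producing procedure (the name is an assumption), which also flags the row so the notification only goes out once:

-- Builds the notification payload and marks it as sent in one statement.
CREATE PROCEDURE usp_Sel_FileEventNotify
AS
BEGIN
    UPDATE tFileEventNotify
    SET NotificationSent = 1
    OUTPUT inserted.ReceiveLocationName, inserted.LastPickupDate, inserted.NextPickupDate
    WHERE NextPickupDate < GETDATE()
      AND (NotificationSent IS NULL OR NotificationSent = 0);
END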
One option is to set up a BAM Alert to trigger if no file is received during the day.
Here's one mostly out of the box solution:
BizTalk Server: Detecting a Missing Message
Basically, it's an Orchestration that listens for any message from that Receive Port and resets a timer. If the timer expires, it can do something.
I need to create a data flow for an existing MS SSDT project that imports a flat CSV file into an existing database table. So far so good.
However, I would like to reject all entries where the column "code" matches values already stored in the db. Even better, if possible, in the case that the column "code" matches an entry in the database, I would like to update the column "description". The important thing is that under no circumstances should duplicate code entries be created.
Thanks
Ok, so seeing as I figured it out, I thought someone else might find it useful:
The short answer is that a lookup is needed between the data source and destination. This "lookup" will split the flow between matches that need updating and new values that need to go straight into a new table row.
Values that match the database, and whose description needs updating, need to be fed into an "OLE DB Command".
Within the "lookup" component we need to do the following:
Go to the general tab and select Redirect rows to no match output
Go to the connection tab and insert the following SQL:
SELECT id, code FROM tableName
Go into the "Columns" tab and check the "id" column on the "Available lookup Columns" table. Also chech the "code" column and drag it to its corresponding "Available Inputs Columns" counterpart to map them to eachother so that the look up can compare them.
-- At this point, if you get an error caused by the mapping, try replacing the code in step 2 with:
SELECT id, CAST(code AS nvarchar(50)) AS code FROM tableName
In the Error Output, ensure that id under "Lookup Match Output" has a description of "Copy Column"
Now we need to configure the "OLE DB command" component:
Go to the "Connection Managers" tab and ensure the component is connected to the desired DB
Go to "Component Properties" and add the following code to the "SQLCommand" property:
UPDATE tableName SET description = ? WHERE id = ?
Note the "?". It is supposed to be there to indicate a parameter must be added to the "Column Mappings" tab, do not replace them.
Finally go into the "Column Mappings" tab and map Param_0 (the first ?) to the "description" column and "Param_1" to the "id" column. No action is needed on the "code" or any other column the db table may contain.
Now give yourself a big pat on the back for having completed a task, that in SQL would normally be one line of code, in about 10 time-consuming steps ;-)
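For reference, the one line of SQL that last remark alludes to would be a T-SQL MERGE, sketched here under the assumption that the CSV has first been bulk-loaded into a staging table named stagingImport:

-- Upsert in one statement: update descriptions for codes that already exist,
-- insert rows for codes that do not, so no duplicate codes can be created.
MERGE tableName AS target
USING stagingImport AS source            -- hypothetical staging table
    ON target.code = source.code
WHEN MATCHED THEN
    UPDATE SET target.description = source.description
WHEN NOT MATCHED THEN
    INSERT (code, description)
    VALUES (source.code, source.description);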
When I reverse engineer from a Progress 9.1e database I get the following error message:
"Column _Desc in table pub._File has value exceeding its max length of precision"
Is this a configuration of the secondary (SQL) broker that I need to set on the Progress server?
Any help, much appreciated!
Progress uses variable-length storage for its character fields, so if a record's field data length is greater than the column's SQL width, the result is errors like this.
You'll need to either change the SQL width of the _File table or shorten the data in the offending _File._Desc record.
Keep in mind that the _File table is part of the db schema, so messing with it directly isn't recommended.
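If you go the SQL-width route, the usual tools are the dbtool utility (which can scan for and fix SQL widths) or an OpenEdge SQL statement along these lines (a sketch: the width value is an assumption, and whether schema tables like _File can be altered this way may depend on the version):

-- Widen the SQL width of the _Desc column so its longest value fits.
ALTER TABLE PUB."_File"
    ALTER COLUMN "_Desc" SET PRO_SQL_WIDTH 640;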