Oracle GoldenGate Replicat

Can anyone help me? I want to run the Replicat in my GoldenGate installation on Windows. This is my Replicat parameter file:
-- Replicat group --
replicat rep1
-- source and target definitions
ASSUMETARGETDEFS
--target database login --
userid ggtarget, password oracle
--file for discarded transaction --
discardfile C:\app\<name>\product\12.1.2ggtarget\oggcore_1\dirdat\rep1_discard.txt, append megabytes 10
--ddl support
DDL
--Specify table mapping --
map EAM.*, target EAM.*;
When I start the Replicat, it says that the Replicat is starting, but when I type info all, the Replicat is stopped and the status says it is not currently running. How can I make it run?

Check the ggserr.log file for error messages.
The line with the MAP command is incomplete. There should be something after the dot,
for example:
map EAM.*, target EAM.*;
or:
map EAM.table, target EAM.table;
or:
map table, target table;

Check the report file for errors,
e.g.: view report rep1.rpt
It should contain the details of how the Replicat parameter file is being parsed when trying to start the Replicat.
However, if you don't see any error code in the report file or in ggserr.log, check your environment variables and the location of your parameter files.
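For reference, a minimal GGSCI sequence to restart the Replicat and inspect its report might look like this (the group name is taken from the question):
start replicat rep1
info replicat rep1
view report rep1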

Try this form of the DISCARDFILE line.
Example:
DISCARDFILE ./dirrpt/discard.txt, APPEND, MEGABYTES 20
In your config:
discardfile C:\app<name>\product\12.1.2ggtarget\oggcore_1\dirdat\rep1_discard.txt, append megabytes 10
I think you are missing a comma after APPEND.
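Putting the suggestions together, a corrected parameter file might look like the following (paths and schema names are taken from the question; adjust them to your environment):
replicat rep1
ASSUMETARGETDEFS
userid ggtarget, password oracle
discardfile C:\app\<name>\product\12.1.2ggtarget\oggcore_1\dirdat\rep1_discard.txt, APPEND, MEGABYTES 10
DDL
map EAM.*, target EAM.*;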

Related

Node-Red Get File from SFTP

I am using Node-RED to connect to an SFTP server and get a file.
The problem is that it always returns this error:
"Error: sftp.realPath: The "string" argument must be of type string or
an instance of Buffer or ArrayBuffer. Received type number
(1645531908336) 1645531908336"
Note: I did not get this error on "List", and I have tried many SFTP servers. It's always the same error.
This is the image of my node:
1645531908336 looks like it is the timestamp from a standard msg.payload value from an inject node.
Assuming this is the node-red-contrib-better-sftp node, it looks like it is badly written: it always takes the value of msg.payload and overrides any configured path and filename in the node config. This is a really bad design; it should only use the input message if the value is left blank in the config.
https://github.com/sublime93/node-red-contrib-better-sftp/blob/10f67d46f3d762b254f7a6f22539ba4c95d6331e/transports/sftp/index.js#L148
So the quick option for you to test is to remove the msg.payload from the inject node you are testing with.
You should also raise an issue with the node's author to fix this behaviour.
EDIT: There is already an open issue (from mid-2020): https://github.com/sublime93/node-red-contrib-better-sftp/issues/9

If I execute the same test case two times, is there a way to generate report.html with a timestamp in the file name?

I have a question related to report.html that maybe someone could help me clarify.
If I execute the same test case two times, is there a way to generate report.html with a timestamp in the file name, so that after 2 executions I have 2 report.html files?
for example:
report_20200529_15:00:00.html
report_20200529:15:05:00.html
Thanks in advance for your help
This is covered in the robot framework user guide, in a section titled timestamping output files:
All output files listed in this section can be automatically timestamped with the option --timestampoutputs (-T). When this option is used, a timestamp in the format YYYYMMDD-hhmmss is placed between the extension and the base name of each file.
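For example, a run like the following (the test suite path is illustrative) produces files named along the lines of output-20200529-150000.xml, log-20200529-150000.html and report-20200529-150000.html:
robot --timestampoutputs tests/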

How to suppress "-mmax value exceeded. Automatically increasing from old value to new value." (5409)?

In the Progress KB it is mentioned:
In 10.0B02 and above, the client session startup parameter -noincrwarn was reintroduced to allow the selective suppression of the above four warning messages ONLY, since the execution of the 4GL statement SESSION:SUPPRESS-WARNINGS = YES. suppresses ALL warning messages during the session.
Where and how could I set this startup parameter -noincrwarn to suppress this warning message?
"SESSION:SUPPRESS-WARNINGS = YES." doesn't do much of anything useful. Or at least it didn't the last time I tested it.
The -mmax warning is harmless. It is a "soft" limit that is dynamically allocated and expanded as needed. You can ignore it. Or if the .lg file entries really bother you, you can simply increase it to a reasonable value. I routinely set it to 8192 for character sessions, 32768 for Windows. The default, as JensD says, is ludicrously low.
Startup parameters, such as -noincrwarn, can be set in a number of ways:
1) Via the command line. If your application starts via a script, it will eventually invoke Progress via "pro", "mpro", progress, prowin32, proapsv or some other executable (you can potentially link your own objects and create custom executables...). The command line that invokes Progress will have a number of parameters; you could add it there. Windows example:
@echo off
set DLC=\Progress\OpenEdge
%DLC%\bin\prowin32 -db mydb -p start.p -noincrwarn
(On windows it is also common for the shortcut properties to have the command line listed.)
2) In a "pf" file. "PF" files are parameter files. They contain a list of parameters in a text file. This makes it easy to share and manage parameters between many scripts. To use a parameter file you need at least one -pf filename.pf parameter. Unix example:
#!/bin/sh
DLC=/usr/dlc
export DLC
${DLC}/bin/_progres -db mydb -pf mypf.pf
Where "mypf.pf" might contain:
# mypf.pf
-p start.p
-noincrwarn
There is a global .pf file in the Progress install directory called startup.pf. You could also add it to that.
3) In an "ini file". Sort of like the pf file but more complicated. Indicated by the -ininame startup parameter. Can also be influenced by registry keys.
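A hedged Windows example of pointing the client at a specific ini file (the file name is illustrative):
%DLC%\bin\prowin32 -db mydb -p start.p -ininame myapp.ini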
Why not remove -mmax, or try another value? If you're moving from an old version of Progress, it might be that -mmax is set very low.
The Maximum Memory (-mmax) client session parameter specifies the maximum amount of memory allocated for r-code segments, in kilobytes.
Source: http://knowledgebase.progress.com/articles/Article/P11351?popup=true
Large memory consumption might depend on complicated business logic (things like very large and/or deeply nested procedures), so you might consider looking into that.
However, a much easier fix would be to increase the value. The default is 3096, meaning each client "only" gets 3 MB for this. Not a very large amount by today's standards.
If you really only want to suppress the message, set -noincrwarn in your client-side startup script (or the corresponding .pf file/startup.pf).
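For example, a client .pf file combining both suggestions might look like this (the -mmax value of 8192 follows the earlier recommendation and is only illustrative):
# client.pf
-mmax 8192
-noincrwarn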
Hosting a WPF element (Windows Presentation Foundation) in an OpenEdge application can cause the application to crash if any message covers the window. That is also the case with this message.
In order to suppress any messages, including message 5409, I followed the article "HOW TO SUPPRESS WARNING MESSAGES (5407),(5408),(5409),(5410) FROM DISPLAYING ON CLIENT SCREENS" and used SESSION:SUPPRESS-WARNINGS = YES. as the first line in the starting procedure of the application, with the expected results.
Using -noincrwarn as the session startup parameter had no effect in OpenEdge 11.4.
Suppress OpenEdge messages:
http://knowledgebase.progress.com/articles/Article/P79795?popup=true
.NET-related error for an OpenEdge-WPF hybrid application, "Invisible or disabled control cannot be activated":
https://social.msdn.microsoft.com/Forums/windows/en-US/e8cf6431-2a59-4335-8b36-fc8f35083823/invisible-or-disabled-control-cannot-be-activated?forum=winforms

Autosys file watcher for a particular filename on Windows

I am trying to write a file watcher job in Autosys that would watch for a particular file. The file name format would be filename_ddmmyyyy.
The requirement is that the file arrives at 7.15am every day; the file watcher job starts running at 6.50am and runs till 8am. If the file is received by then, the job is successful, otherwise an alert is raised.
Now what I am trying to do is watch for the file filename_ddmmyyyy for a particular day, e.g. if today is 22nd Feb 2013, the file name will be filename_22022013 and this is the file that I am looking for. If I use wildcards like filename_*, it would look for all possible files, which I don't want.
I am not sure how to do this in Windows.
Any help would be much appreciated.
Let me know in case of questions.
You will need to use the profile job attribute to initialize variables when the job starts. One of these variables will need to be the date pattern you are looking for (you'll need another process that outputs that dynamically). Then, once you set it to a variable in your profile script, you can refer to that variable name from within the watch_file attribute.
Create a global variable holding the date and use that variable.
Example: filename_$${GV_DATE}
GV_DATE: ddmmyyyy
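A rough JIL sketch of such a file watcher job, assuming the GV_DATE global variable is maintained by another process (the machine name, path and times are illustrative):
insert_job: fw_filename
job_type: FW
machine: winbox01
watch_file: C:\incoming\filename_$${GV_DATE}
watch_interval: 60
start_times: "06:50"
term_run_time: 70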
Pretty late to answer, but here is an answer without using a global variable. You can use a formatted system date variable in the file name.
File_to_watch: filename_%date:~10,4%%date:~4,2%%date:~7,2%
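The %date:~start,length% syntax extracts substrings from the locale-dependent %date% value. As a sketch, assuming %date% expands to something like Fri 02/22/2013, a ddmmyyyy string (as in the question) could be built in a batch script like this (the offsets depend on your regional settings):
@echo off
rem build ddmmyyyy from %date% = "Fri 02/22/2013" (locale-dependent)
set TODAY=%date:~7,2%%date:~4,2%%date:~10,4%
echo filename_%TODAY%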

Load Freebase full dump file to Virtuoso

I have downloaded the full RDF Freebase dump file 'freebase-rdf-2012-12-09-00-00.gz' (7.5 GB) from this link: http://download.freebaseapps.com/
This data dump uses the Turtle RDF syntax as defined here: http://wiki.freebase.com/wiki/Data_dumps
How can I load this file into Virtuoso (06.04.3132) ?
I tried to use this command
SQL> DB.DBA.TTLP_MT (file_to_string_output ('freebase-rdf-2012-12-09-00-00.gz'), '', 'http://freebase.com');
but it finished in a short time. The following request returned only 2 rows (triples) from the source file, and there were no exceptions in the log.
SELECT ?a ?b ?c from <http://freebase.com> where {?a ?b ?c}
http://rdf.freebase.com/ns/american_football.football_historical_roster_position.number  http://rdf.freebase.com/ns/type.object.name  Number
http://rdf.freebase.com/ns/american_football.football_historical_roster_position.number  http://rdf.freebase.com/ns/type.object.type  http://rdf.freebase.com/ns/type.property.
2 Rows. -- 78 msec.
By the way, how long might it take to load such a big file (with 8 GB or 24 GB of RAM)?
Can this dump file be loaded into TDB (via tdbloader), Sesame OpenRDF (via load) or an OWLIM SE repository without modification?
And after loading, will my SELECT SPARQL queries (not very complex) return a response in reasonable time?
Thank you!
I've got the reply from the [freebase-discuss] mailing list:
This Freebase dump should be unpacked, split and run through fix scripts. More details here:
http://people.apache.org/~andy/Freebase20121223
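As a hedged sketch of the loading step itself, once the dump has been unpacked, split and fixed as described above, the resulting files could be registered with Virtuoso's RDF bulk loader and loaded into the target graph (the directory and file mask are illustrative; on older Virtuoso 6 builds the bulk loader procedures may need to be installed first):
SQL> ld_dir ('/data/freebase', '*.nt', 'http://freebase.com');
SQL> rdf_loader_run ();
SQL> checkpoint;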
