Topbeat to monitor a specific Java process

I am a newbie to Beats. I am using Topbeat to monitor system health.
Up to this point everything is fine.
Now I need to monitor the resource utilization of a Java process, so I configured topbeat.yml with: procs: ["java"]
On my Linux box there are 4 Java processes running, but I am interested in only one of them. So:
Is there any way to monitor a specific Java process using a regex?
Is there any way to differentiate the processes by name [not by PID]?

If you wish to view certain processes, you can use the sample Topbeat dashboards; one of the saved searches there is for proc stats. From there, select proc.name from the available fields and filter further on the relevant proc.name.
Suggestion from the Elastic forum (https://discuss.elastic.co/t/topbeat-monitor-specific-java-process/65594/2): try Metricbeat and see if it helps.
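If you go down the Metricbeat route, here is a rough sketch (the config path, jar name and regex pattern are placeholders, not something from the question): the system module's process metricset reports system.process.cmdline, so a drop_event processor can keep only the JVM whose command line matches.

# Sketch only: report process metrics and drop every JVM except the one whose
# command line contains the jar of interest, then restart Metricbeat.
sudo tee /etc/metricbeat/modules.d/system.yml >/dev/null <<'EOF'
- module: system
  metricsets: ["cpu", "memory", "process"]
  period: 10s
  processes: ['java']                 # regex list matched against the process name
  processors:
    - drop_event:
        when:
          not:
            regexp:
              system.process.cmdline: 'my-app\.jar'   # placeholder pattern for the JVM of interest
EOF
sudo systemctl restart metricbeat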

Related

Beckhoff PLCs. Create ADS routes on multiple PLCs through a script

Let's say I have 20 PLCs and I want to create a route on all of them.
Is there any way to do it quickly without the TwinCAT manager or Visual Studio?
Is it possible to scan every AMS NetId on the network and get a list? Currently I have to type them in from TwinCAT, and I don't see any button or option to copy an AmsNetId address.

How do I make sure my attached R session is using the compute node instead of the admin (i.e. login) node?

I am using VS Code for R in a remote Unix environment. My goal is to run regular interactive jobs while editing the script on the remote server, much as people usually do locally in RStudio.
The HPC server I use has an admin node (i.e. login node) and a compute node (mostly for interactive jobs).
Usually what I do is log in to the admin node first (via ssh), request certain resources (e.g. memory, CPUs) on a compute node, and then run
ssh $SLURM_JOB_NODELIST
which moves me from the 'admin' node to the 'compute' node in the terminal.
Lastly, I run "R: Create R terminal". However, I can't tell whether this R terminal is running on the compute node or the admin node.
There is a workaround: use the 'radian' package and set "r.alwaysUseActiveTerminal" to "true". However, this way my data viewer isn't attached and I can't view my data in the 'workspace'.
The trickiest part is that I need to use 'ssh' to switch between the 'admin' and 'compute' nodes, while the whole left panel of VS Code, including the File Explorer, is still based on the 'admin' node.
Any suggestions and advice are welcome! Thanks a lot!
In R, use Sys.info() to find the hostname of the computer R is running on:
> Sys.info()["nodename"]
nodename
"node002.cluster"

Kafka Connector for Oracle Database Source

I want to build a Kafka connector in order to retrieve records from a database in near real time. My database is Oracle Database 11g Enterprise Edition Release 11.2.0.3.0, and the tables have millions of records. First of all, I would like to put the minimum load on my database by using CDC. Secondly, I would like to retrieve records based on a LastUpdate field whose value is after a certain date.
Searching the Confluent site, the only open-source connector I found was the "Kafka Connect JDBC" connector. I think this connector doesn't have a CDC mechanism, and it isn't possible to retrieve millions of records when the connector starts for the first time. The alternative solution I thought of is Debezium, but there is no Debezium Oracle connector on the Confluent site, and I believe it is still in beta.
Which solution would you suggest? Is anything wrong with my assumptions about Kafka Connect JDBC or the Debezium connector? Is there any other solution?
For query-based CDC, which is less efficient, you can use the JDBC source connector.
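As a rough sketch of that option (host names, credentials, table and topic names below are placeholders, not taken from the question), a timestamp-mode JDBC source connector keyed on the LastUpdate column can be registered through the Kafka Connect REST API:

# Sketch only: query-based CDC on the LastUpdate column via the JDBC source connector.
curl -X POST -H "Content-Type: application/json" http://connect-host:8083/connectors -d '{
  "name": "oracle-lastupdate-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:@db-host:1521/ORCL",
    "connection.user": "kafka_user",
    "connection.password": "secret",
    "mode": "timestamp",
    "timestamp.column.name": "LASTUPDATE",
    "table.whitelist": "MY_SCHEMA.MY_TABLE",
    "topic.prefix": "oracle-",
    "poll.interval.ms": "10000"
  }
}'

Newer versions of the connector also have a timestamp.initial setting (epoch milliseconds) to start polling from a given point in time instead of reading the whole table first; check whether your connector version supports it before relying on it.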
For log-based CDC I am aware of a couple of options; however, some of them require a license:
1) Attunity Replicate, which allows users to use a graphical interface to create real-time data pipelines from producer systems into Apache Kafka without having to do any manual coding or scripting. I have been using Attunity Replicate for Oracle -> Kafka for a couple of years and have been very satisfied.
2) Oracle GoldenGate, which requires a license.
3) Oracle LogMiner, which does not require any license and is used by both Attunity and kafka-connect-oracle, a Kafka source connector for capturing all row-based DML changes from an Oracle database and streaming these changes to Kafka. Its change data capture logic is based on the Oracle LogMiner solution.
We have numerous customers using IBM's IIDR (InfoSphere Data Replication) product to replicate data from Oracle databases (as well as Z mainframe, iSeries, SQL Server, etc.) into Kafka.
Regardless of which source is used, data can be normalized into one of many formats in Kafka. An example of an included, selectable format is:
https://www.ibm.com/support/knowledgecenter/en/SSTRGZ_11.4.0/com.ibm.cdcdoc.cdckafka.doc/tasks/kcopauditavrosinglerow.html
The solution is highly scalable and has been measured replicating changes at hundreds of thousands of rows per second.
We also have a proprietary ability to reconstitute data written in parallel to Kafka back into its original source order. So, despite data having been written to numerous partitions and topics, the original total order can be known. This functionality is known as the TCC (transactionally consistent consumer).
See the video and slides here:
https://kafka-summit.org/sessions/exactly-once-replication-database-kafka-cloud/

SQLite through Mainframe

We have a batch mainframe JCL and an SQLite file on a Windows share path.
We need the data on the Windows share path to be periodically updated based on a mainframe computation.
So we need to create records in the SQLite database based on a mainframe JCL/COBOL program and then manipulate them using SQLite.
Is this feasible? We are not able to find any leads on how to make use of SQLite from a mainframe standpoint. Any information would be very helpful.
Someone's probably going to have to write a CICS routine for you. It might be a better idea to have a program run at your end at the set time(s) and invoke the Mainframe CICS program through yours using web services.
Since the question says that you're dependent on Mainframe calculations, you will have to make sure that you call the CICS program with all the required parameters and values or make sure that it can fetch those natively. Have the CICS program do the computations for you and return the results.
It might also be possible that what you refer to as "Mainframe JCL / COBOL program" (i.e. batch) already has a CICS (online) counterpart and you wouldn't have to write (or make someone write for you) a new routine again. Your Mainframe team should be able to confirm.
You can create an SSH server and serve the data from a file on the HFS side.
You can also FTP to a Wintel stack (with a NETRC DDNAME).
Yes, you can serve web/REST from CICS as well (overkill), or use MQSeries (ditto), or even SMTP. You could also scrape 3270 via EHLLAPI (obscure), or use third-party products like XCOM.
Again, on the USS (OMVS) side, you should be able to sftp (-b) with a script. BPXBATCH is your friend.
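A minimal sketch of that USS side (user, paths and host name are placeholders, and key-based authentication is assumed): build an sftp batch file and run it, either straight from the shell or from a JCL step via BPXBATCH.

# Sketch only: push the generated extract to the Windows-side SFTP host.
cat > /u/batchusr/upload.sftp <<'EOF'
put /u/batchusr/extract.csv /SharePath/extract.csv
quit
EOF
sftp -b /u/batchusr/upload.sftp batchusr@winshare-host
# Wrapped in a shell script, the same two steps can be driven from JCL with
# BPXBATCH, e.g. PARM='SH /u/batchusr/upload.sh'.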
A great many shops have been doing these things and more for a very long time.

How to migrate Wordpress between Compute Engine instances

I have recently created a very small Google Compute Engine instance, naively thinking it's one of those easily scalable things Google people keep raving about.
I used the quick deployment feature of Wordpress and it all installed itself nicely, so I started configuring and adding data etc.
However, I then found out that I can't scale an existing instance (i.e. it won't let me change the instance type to a bigger one; I don't get why not, but there you go), so it looks like I need to find a way to migrate my Wordpress installation to a new instance.
Will I simply be able to create a new instance and point it at the persistent disk my small instance currently uses, et voila, Bob's your uncle?
Or do I need to manually get the files and MySQL data off the first instance and re-import them into an empty new instance?
What's the easiest way?
Any advice or helpful links would be appreciated.
Thanks.
P.S.: Btw, should I try to use the Google Cloud SQL store instead of a local MySQL installation?
In order to upgrade your VM:
1) Access the VM's settings in the Developers Console (your project -> Compute -> Compute Engine -> VM instances -> click on the VM's name).
2) Scroll down to the "Disks" section and un-check "Delete boot disk when instance is deleted".
3) Delete the VM in question. Take note that the disk, named after the instance, will remain.
4) Create a new VM, selecting "Existing disk" under Boot disk - Boot source. In the next box down, select the disk from point 3 above, as well as a bigger machine type.
The resulting new instance will use the existing disk from the old one, with improved hardware / performance.
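If you prefer the command line, roughly the same sequence can be done with gcloud (instance, disk, zone and machine type names below are placeholders):

# Sketch only: keep the boot disk, delete the VM, recreate it bigger on the same disk.
gcloud compute instances set-disk-auto-delete my-wp-vm --disk my-wp-vm --no-auto-delete --zone us-central1-a
gcloud compute instances delete my-wp-vm --keep-disks boot --zone us-central1-a
gcloud compute instances create my-wp-vm-big --machine-type n1-standard-2 \
  --disk name=my-wp-vm,boot=yes --zone us-central1-a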
As for using Cloud SQL in lieu of a VM-installed database, it's perfectly feasible, and it allows you to adjust the Cloud SQL instance to match your actual use. A few considerations when setting up this kind of instance:
limit the IPs allowed to connect to your Cloud SQL instance to your frontend's IP, and perhaps the IP or subnet of the workstation from which you maintain the database.
configure Cloud SQL to use SSL certificates.
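Both of those can also be done with gcloud, for example (instance name and IP are placeholders):

# Sketch only: restrict authorized networks to the frontend's IP and require SSL.
gcloud sql instances patch my-cloudsql-instance --authorized-networks 203.0.113.10/32
gcloud sql instances patch my-cloudsql-instance --require-ssl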
Sammy's answer covers the important stuff; I just wanted to clarify how your files are arranged on the two disks that are attached to your instance:
The data disk contains /var/www/, which is all of the Wordpress files. It's mounted on the instance at /wordpress.
The boot disk contains everything else, including the MySQL database that was created for the Wordpress installation.
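If you want to confirm that layout on a running instance, a quick check (mount points as described above) shows which filesystem backs each path:

# Show which disk holds the Wordpress files and which holds everything else.
df -h /wordpress   # the data disk (the files from /var/www/)
df -h /            # the boot disk (including the MySQL data)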
