We have a batch mainframe JCL job and an SQLite file on a Windows share path.
We need the data on the Windows share path to be periodically updated based on a mainframe computation.
So we need to create records in the SQLite database based on a mainframe JCL/COBOL program and then manipulate them using SQLite.
Is this feasible? We have not been able to find any leads on how to make use of SQLite from a mainframe standpoint. Any information would be very helpful.
Someone's probably going to have to write a CICS routine for you. It might be a better idea to have a program run at your end at the set time(s) and invoke the Mainframe CICS program through yours using web services.
Since the question says that you're dependent on Mainframe calculations, you will have to make sure that you call the CICS program with all the required parameters and values or make sure that it can fetch those natively. Have the CICS program do the computations for you and return the results.
It might also be possible that what you refer to as "Mainframe JCL / COBOL program" (i.e. batch) already has a CICS (online) counterpart and you wouldn't have to write (or make someone write for you) a new routine again. Your Mainframe team should be able to confirm.
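As a rough illustration of that second suggestion (a job on your side invoking a mainframe-hosted web service on a schedule), here is a minimal sketch in Python; the URL, request fields and response layout are entirely hypothetical and would depend on how the CICS service is actually defined:

import json
import urllib.request

# Hypothetical CICS web service endpoint and payload; replace with whatever
# your mainframe team actually exposes.
SERVICE_URL = "https://mainframe.example.com/cics/computation"

request = urllib.request.Request(
    SERVICE_URL,
    data=json.dumps({"runDate": "2024-01-31"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    results = json.loads(response.read())

# 'results' would then be written into the SQLite file on the share,
# e.g. with the sqlite3 module (see the sketch further down).
print(results)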
You can create an SSH server and serve the data from a file on the HFS side.
You can also FTP to a Wintel Stack (with NETRC DDNAME).
Yes, you can serve web/REST from CICS as well (overkill), or use MQSeries (ditto), or even SMTP. You can also scrape 3270 via EHLLAPI (obscure), or use third-party products like XCOM.
Again, on the USS (OMVS) side, you should be able to sftp (-b) with a script. BPXBATCH is your friend.
A great many shops have been doing these things and more for a very long time.
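On the Windows side, loading a mainframe-produced extract into the SQLite file is straightforward. Here is a minimal sketch assuming the mainframe pushes a CSV to the share via sftp/FTP first; the paths, table and column names are placeholders, not anything from the question:

# Hypothetical sketch: load a mainframe-produced CSV extract into the SQLite
# database on the Windows share. File names, paths and columns are assumptions.
import csv
import sqlite3

SHARE_DB = r"\\fileserver\share\mainframe_data.db"    # placeholder path
EXTRACT = r"\\fileserver\share\incoming\extract.csv"  # pushed via sftp/FTP

def load_extract(db_path: str, csv_path: str) -> int:
    conn = sqlite3.connect(db_path)
    try:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS computations ("
            "  account TEXT, run_date TEXT, amount REAL)"
        )
        with open(csv_path, newline="") as fh:
            rows = [(r["account"], r["run_date"], float(r["amount"]))
                    for r in csv.DictReader(fh)]
        conn.executemany(
            "INSERT INTO computations (account, run_date, amount) VALUES (?, ?, ?)",
            rows,
        )
        conn.commit()
        return len(rows)
    finally:
        conn.close()

if __name__ == "__main__":
    print(f"loaded {load_extract(SHARE_DB, EXTRACT)} rows")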
My place of business currently uses Reflection for Unix and OpenVMS to handle a database of customers. I access this database directly through the Reflection emulation. The only way to get data out of Reflection is to navigate to a single customer via keyboard input and print the information to a .txt.
Is there any way I can access the VM other than through Reflection, with the end goal of automating the retrieval of customer information from a Java script executed outside of the Reflection environment? This is the information I can gather via the Reflection interface about what I am connecting to:
At the bottom of the Reflection interface - VT500-7 -- HOST_NAME via SECURE SHELL
Via the Connection Setup drop-down:
Host name: HOST_NAME
SSH config scheme: AutoKeyLogin
User name: username
Via the Security... button:
General tab:
Port number: 22
User Authentication: [x] Public Key
[x] Password
User Keys tab:
Use Name Type Location
[x] username1user DSA C:\Documents\PathToSSHKey\.ssh
Host Keys tab:
Host Type Fingerprint
HOST_NAME, 111.1.111.11, 22 DSA 39:14:f3:123:fds:restOfFingerprint
There is more information available if the solution is possible but I have just not provided enough to solve it, so please ask.
Given that I have the host name, port, .ssh, and host key, is it possible to connect to and read from the VM that I am otherwise connecting to normally via the Reflection emulator?
NO. Reflection (another example is PuTTY) is just a dumb-terminal emulator, here using the (secure) SSH protocol to connect to some operating system. From the information provided we cannot even tell which OS; maybe OpenVMS, maybe some Unix. Most certainly not a 'VM', but a physical box: maybe an Alpha, Integrity, Sun, IBM or Intel server.
IF, big if, it is OpenVMS you would possibly see something like this flash by on entry:
XXX - HP rx2600 (1.50GHz/6.0MB) OpenVMS IA64 V8.3-1H1
Last interactive login on Thursday, 7-DEC-2017 13:23:19.83
Last non-interactive login on Wednesday, 6-DEC-2017 12:35:45.80
Most likely the username in use is set up to always start a (shell) script which starts a menu, from which a program is activated that knows how to access the data records. IF it is OpenVMS then the actual data is likely stored in RMS (indexed) files, but it could be in a proper (Oracle Rdb or another RDBMS) database.
If bulk access to the data is needed then you need to talk to the system/application manager for the system 'HOST_NAME' and ask them about the application and its database.
You may find that there is FTP, ODBC, JDBC or native DB (OCI?) access to the data available already, or that this can be requested. Likely tools in this space are ConnX, Attunity Connect, and such.
First you'll need to find out which OS/platform/version, which application (3rd party? home grown? 4GL? COBOL? BASIC?), and ultimately which database/storage method.
That's not to say that some terminal emulator cannot be 'tricked' (google for "screen scraping") into being programmed to fetch a series of data on command, but that will always be error prone and laborious, and only practical for limited volumes.
You are better off trying to get proper data access.
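For completeness, here is what the programmatic SSH route looks like in outline, as a minimal sketch assuming Python with the paramiko library; the host, user and key path are placeholders echoing the question, and which command produces usable customer data depends entirely on the remote application:

# Hypothetical sketch: connect over SSH (same port/key Reflection uses) and run
# a command. What command yields usable customer data depends on the host app.
import paramiko

HOST = "HOST_NAME"                                   # from the question
USER = "username"                                    # placeholder
KEY_FILE = r"C:\Documents\PathToSSHKey\.ssh\id_dsa"  # placeholder key path

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # or load known_hosts
client.connect(HOST, port=22, username=USER, key_filename=KEY_FILE)

# On OpenVMS this might be a DCL command; on Unix, a shell command.
stdin, stdout, stderr = client.exec_command("show time")
print(stdout.read().decode(errors="replace"))
client.close()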
Good luck! You'll need some.
Hein
I am a newbie to Beats. I am using Topbeat to monitor system health.
Up to this point everything is fine.
Now I need to monitor the resource utilization of a Java process, so I configured topbeat.yml as: procs: ["java"]
On my Linux box there are 4 Java processes running, but I am interested in only one of them. So:
Is there any way to monitor a specific Java process using a regex?
Is there any way to differentiate the processes by name [not by PID]?
If you wish to view certain processes, you can use the sample Topbeat dashboards; in that dashboard there is one saved search for proc stats. From there, select proc.name from the available fields and filter further on the relevant proc.name.
Suggestion from the Elastic forum (https://discuss.elastic.co/t/topbeat-monitor-specific-java-process/65594/2): try Metricbeat and see if it helps.
Does anybody know where OpenStack Swift stores the "rings"? Is there a distributed algorithm, or is it just one table somewhere on one of the storage nodes with information about all (!) the physical object locations? I cannot believe the latter, because from my understanding of object storage it should scale to exabytes, and that would require an enormous number of entries in such a table...
This page could not help me: http://docs.openstack.org/developer/swift/overview_ring.html
Thanks in advance for your help!
Ring Builder
The rings are built and managed manually by a utility called the ring-builder. The ring-builder assigns partitions to devices and writes an optimized Python structure to a gzipped, serialized file on disk for shipping out to the servers. The server processes just check the modification time of the file occasionally and reload their in-memory copies of the ring structure as needed.
So it's stored on all servers.
If you were asking about the path of the ring.gz files, they are under /etc/swift by default.
Also, these ring files can be updated from the .builder files when a swift rebalance is run.
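To make the "no central table" point concrete: each server answers "where does this object live?" purely from its local ring copy. A minimal sketch using Swift's own Python API, with the ring path and account/container/object names as placeholders:

# Hypothetical sketch: look up which devices hold an object by consulting the
# local ring file (each node has its own copy, under /etc/swift by default).
from swift.common.ring import Ring

object_ring = Ring("/etc/swift", ring_name="object")  # loads object.ring.gz

# Account/container/object names are placeholders.
partition, nodes = object_ring.get_nodes("AUTH_demo", "photos", "cat.jpg")

print("partition:", partition)
for node in nodes:
    print(node["ip"], node["port"], node["device"])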
I have a SQLite database that is used by two processes. I am wondering, with the most recent version of SQLite, while one process (connection) starts a transaction to write to the database will the other process be able to read from the database simultaneously?
I collected information from various sources, mostly from sqlite.org, and put them together:
First, by default, multiple processes can have the same SQLite database open at the same time, and several read accesses can be satisfied in parallel.
In the case of writing, a single write locks the database for a short time; during that window nothing, not even a read, can access the database file at all.
Beginning with version 3.7.0, a new “Write Ahead Logging” (WAL) option is available, in which reading and writing can proceed concurrently.
By default, WAL is not enabled. To turn WAL on, refer to the SQLite documentation.
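As an illustration (not part of the quoted documentation), here is a minimal Python sqlite3 sketch of a writer enabling WAL so that a reader in a second process can keep issuing SELECTs while a write transaction is open; the path and schema are placeholders:

# Hypothetical sketch: writer process enables WAL so a reader in another
# process can still SELECT while this transaction is open.
import sqlite3

DB_PATH = "shared.db"  # placeholder path

conn = sqlite3.connect(DB_PATH)
conn.execute("PRAGMA journal_mode=WAL")   # persists in the database file
conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, msg TEXT)")

with conn:                                # write transaction
    conn.execute("INSERT INTO events (msg) VALUES (?)", ("hello",))
    # A separate process can still run SELECTs against DB_PATH at this point,
    # seeing the database as it was before this transaction committed.

conn.close()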
SQLite3 explicitly allows multiple connections:
(5) Can multiple applications or multiple instances of the same
application access a single database file at the same time?
Multiple processes can have the same database open at the same time.
Multiple processes can be doing a SELECT at the same time. But only
one process can be making changes to the database at any moment in
time, however.
For sharing connections, use SQLite3 shared cache:
Starting with version 3.3.0, SQLite includes a special "shared-cache"
mode (disabled by default)
In version 3.5.0, shared-cache mode was modified so that the same
cache can be shared across an entire process rather than just within a
single thread.
5.0 Enabling Shared-Cache Mode
Shared-cache mode is enabled on a per-process basis. Using the C
interface, the following API can be used to globally enable or disable
shared-cache mode:
int sqlite3_enable_shared_cache(int);
Each call sqlite3_enable_shared_cache() effects subsequent database
connections created using sqlite3_open(), sqlite3_open16(), or
sqlite3_open_v2(). Database connections that already exist are
unaffected. Each call to sqlite3_enable_shared_cache() overrides all
previous calls within the same process.
I had a similar code architecture to yours: a single SQLite database which process A read from while process B wrote to it concurrently based on events (Python 3.10.2, using the most up-to-date sqlite3 version). Process B was continually updating the database, while process A was reading from it to check data. My issue was that it was working in debug mode, but not in "release" mode.
In order to solve my particular problem I used write-ahead logging, which is referenced in previous answers. After creating my database in process B (write mode) I added the line:
cur.execute('PRAGMA journal_mode=wal')
where cur is the cursor object created from the established connection.
This sets the journal to WAL mode, which allows concurrent access for multiple reads (but still only one write). In process A, where I was reading the data, before connecting to the same database I included:
time.sleep(0.5)
Adding a sleep before connecting to the same database fixed my issue with it not working in "release" mode.
In my case I did not have to manually set any checkpoints, locks, or transactions. Your use case might be different from mine, however, so some research is most likely required. Nevertheless, I hope this post helps and saves everyone some time!
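As a side note beyond that answer, sqlite3's built-in timeout can often stand in for a fixed sleep on the reading side: the connection retries for up to the given number of seconds when the file is briefly locked. A minimal sketch, with the path and table as placeholders:

# Hypothetical sketch: let sqlite3 wait out short-lived locks instead of
# sleeping for a fixed interval before connecting.
import sqlite3

conn = sqlite3.connect("shared.db", timeout=5.0)  # retry on "database is locked"
conn.execute("PRAGMA busy_timeout = 5000")        # same idea, in milliseconds
rows = conn.execute("SELECT COUNT(*) FROM events").fetchall()  # placeholder table
print(rows)
conn.close()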
We have a vendor that sends CSV files as email attachments. These CSV files contain statuses that are imported into our application. I'm trying to automate the process end-to-end, but it currently depends on someone opening an email, saving the attachment to a server share, so the application can use the file.
Since I cannot convince the vendor to change their process, such as offering an FTP location or a Web Service, I'm stuck with trying to automate the existing process.
Does anyone know of a way to programmatically open an email from a POP3 account and extract an attachment? The preferred solution would reside on a Windows 2003 server, be written in VB.NET, and be secure. The application can reside on the same server as the POP3 server; for example, we could set up the free POP3 server that comes with Windows Server and pull against the mail file stored on the file system.
BTW, we are willing to pay for an off-the-shelf solution, if one exists.
Note: I did look at this question but the answer points to a CodeProject solution that doesn't deal with attachments.
Try the Mail.dll email component; it's very affordable, supports attachments and national characters, is easy to use, and also supports SSL:
Using pop3 As New Pop3()
    pop3.Connect("mail.server.com")
    pop3.Login("user", "password")

    Dim builder As New MailBuilder()
    For Each uid As String In pop3.GetAll()
        ' Receive and parse the email message
        Dim mail As IMail = builder.CreateFromEml(pop3.GetMessageByUID(uid))

        ' Write out the received message's subject
        Console.WriteLine(mail.Subject)

        ' Save each attachment from the mail.Attachments collection
        For Each attachment As MimeData In mail.Attachments
            Console.WriteLine(attachment.FileName)
            attachment.Save("c:\" + attachment.FileName)
            ' attachment.Data is also available here
        Next attachment
    Next

    pop3.Close(True)
End Using
You can download it here: http://www.lesnikowski.com/mail.
Possible duplicate of Reading Email using Pop3 in C#.
At least there's a shedload of suggestions there that you may find useful.
I'll throw in a late suggestion for a more generalized "download POP3 messages and extract attachments" solution using existing software and minimal programming. I needed to do this for a client who switched to receiving faxes via email and was not pleased with manually saving the attachments to a location where they could be imported into an application.
For downloading messages on *nix systems fetchmail seems to be the standard and is very capable, but I chose mpop for both simplicity and Windows compatibility (but it is cross-platform). If mpop hadn't done the trick for me, I probably would have ended up doing something with the Python-based getmail, which was created when fetchmail's development stalled for a time (it's since resumed).
Mpop is controlled either via command line or configuration file, so I simply created multiple configuration files and specify via command line which file to load. I'm using it in "Exchange pickup directory" mode, which means it simply downloads the messages and drops them as text (.eml) files in a specified directory.
For extraction of the message attachments, UUDeview appears to be the standard (I'm using the Windows port of UUDeview); it runs on just about any system you could want and offers just about any feature you could want. My main alternative to this was a much-less-capable Python script that I'd developed for a different client back in 2007, but I'm happy to go with a precompiled executable over either installing Python or packaging with any of the Python-to-exe options.
Finally there's the configuration - along with the two mpop configuration files mentioned above (which I could do away with by using command-line options), I also have two 2-line .cmd files launched every 10 minutes by scheduled task - the first line to launch mpop to download into a working directory and the second line to launch UUDeview and extract attachments of specified types (.pdf or .tif) then delete each file from which it extracted attachments. Output is sent to another directory from which staff can directly attach files as needed.
This is overall not the most elegant way to reach these ends, but it was quick, simple, functional and reasonably robust - at each stage if something goes wrong it fails such that no data is lost. The only places where data could be lost are any non-attachment messages being sent to the dedicated fax email addresses, and even those will sit in the processing directory and be caught eventually.
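For anyone who would rather keep the extraction step in Python instead of UUDeview, here is a minimal sketch using only the standard library; the directories and extensions are placeholders mirroring the setup described above:

# Hypothetical sketch: extract .pdf/.tif attachments from downloaded .eml files
# (e.g. a pickup directory filled by mpop) into an output directory, then remove
# each message that yielded attachments. Paths and extensions are assumptions.
import email
import email.policy
from pathlib import Path

INBOX = Path(r"C:\faxes\incoming")    # where the .eml files land
OUTDIR = Path(r"C:\faxes\extracted")  # where staff pick up attachments
WANTED = {".pdf", ".tif"}

OUTDIR.mkdir(parents=True, exist_ok=True)

for eml in INBOX.glob("*.eml"):
    msg = email.message_from_bytes(eml.read_bytes(), policy=email.policy.default)
    saved = 0
    for part in msg.iter_attachments():
        name = part.get_filename()
        if name and Path(name).suffix.lower() in WANTED:
            (OUTDIR / Path(name).name).write_bytes(part.get_payload(decode=True))
            saved += 1
    if saved:
        eml.unlink()  # only delete messages that actually yielded attachments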