VSAM file to be updated outside of the mainframe (.NET Core)

I have a requirement to update/delete VSAM file records from outside the mainframe, i.e., from a .NET application.
I looked around online but didn't find much information on this topic. Are VSAM files accessible to other systems the way DB2 databases or MQ Series are? Any pointers would be helpful.

I know that you can access a VSAM file using a stored procedure (SP). You can use DSNACICS to invoke a CICS program that accesses the VSAM file. The SP has to be external, written in COBOL, Java, or another language, and it can then be called from outside the mainframe.
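For illustration, here is a minimal sketch of what the client side of such a call could look like, in Python via ODBC rather than .NET; the DSN, credentials, procedure name, and parameters are all hypothetical placeholders, not a real interface:

```python
# Minimal sketch: calling a hypothetical external stored procedure that
# fronts a VSAM file. Requires the pyodbc package and an IBM Db2 ODBC
# driver/DSN; all names and parameters below are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=DB2_ON_ZOS;UID=myuser;PWD=mypassword")
cursor = conn.cursor()

# MYSCHEMA.VSAM_UPDATE would be a user-written external SP (e.g. in COBOL)
# that uses DSNACICS to drive a CICS program updating the VSAM record.
cursor.execute("{CALL MYSCHEMA.VSAM_UPDATE(?, ?)}",
               ("RECORD-KEY-001", "new field value"))
conn.commit()
conn.close()
```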
regards,
Roberto Chirinos

The question you are asking is "Are there general programming interfaces for VSAM on the mainframe that I can access that provide CRUD operations on VSAM files?"
Databases like Db2 offer general interfaces, such as JDBC, for accessing the data managed by that Db2 subsystem. VSAM, however, is an access method managed by the operating system. Currently, z/OS does not offer a general-use programming interface (GUPI) for accessing VSAM externally.
To address this, some vendors provide for-fee products that run on the mainframe to make this data accessible. IBM Data Virtualization Manager (DVM) is one such offering. I have not used it, but the referenced documentation shows how to access VSAM files (some access is READ-ONLY, while other configurations provide READ-WRITE).
Essentially, you will need to provide a server-side component to access the VSAM files. This could take a variety of forms. Perhaps the easiest (subjectively) is to write a CICS transaction, exposed via z/OS Connect, that performs the requested operations. IBM Z Open Automation Utilities (ZOAU) provides utilities that can help with this as well.
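To make the shape of that concrete, here is a minimal sketch of a client driving such a z/OS Connect REST API from Python; the host, path, payload, and authentication are assumptions that depend entirely on how the API is defined on the server side:

```python
# Minimal sketch: CRUD calls against a hypothetical z/OS Connect REST API
# that fronts a CICS transaction doing the actual VSAM I/O. The base URL,
# resource path, credentials, and payload shape are all placeholders.
import requests

BASE = "https://zosconnect.example.com:9443/vsam-api/records"  # placeholder
auth = ("myuser", "mypassword")  # placeholder credentials

# Read a record
r = requests.get(BASE + "/RECORD-KEY-001", auth=auth)
r.raise_for_status()
print(r.json())

# Update a record
r = requests.put(BASE + "/RECORD-KEY-001",
                 json={"field1": "new value"}, auth=auth)
r.raise_for_status()

# Delete a record
requests.delete(BASE + "/RECORD-KEY-001", auth=auth).raise_for_status()
```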
Bottom line: there is no platform-provided RESTful means to access VSAM files, but it's possible if you put in the coding effort.

Related

Setting up of Amazon Web Services (AWS) Database and EC2

Currently, I have Python code that builds machine learning models. The data for these models comes from a local SQLite database (my client provides the data to us in an S3 bucket; I download it to my machine and push it into the SQLite database). At a very high level, these are the 3 steps I perform on my machine (a sketch of step 1 follows the list):
Download the data from S3 and load it into SQLite
Connect to SQLite from Python and perform data cleaning, aggregation, and model building
Write the results back to SQLite
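A minimal sketch of the download-and-load step, assuming the S3 object is a CSV file; the bucket, key, and table names are placeholders, and it requires the boto3 package plus AWS credentials configured on the machine:

```python
# Minimal sketch: download a CSV from S3 and load it into SQLite.
# Bucket, key, filenames, and schema below are placeholders.
import csv
import sqlite3
import boto3

s3 = boto3.client("s3")
s3.download_file("my-client-bucket", "exports/data.csv", "data.csv")

conn = sqlite3.connect("models.db")
conn.execute("CREATE TABLE IF NOT EXISTS raw_data (col1 TEXT, col2 TEXT)")
with open("data.csv") as f:
    rows = [(r[0], r[1]) for r in csv.reader(f)]
conn.executemany("INSERT INTO raw_data VALUES (?, ?)", rows)
conn.commit()
conn.close()
```

The same code runs unchanged on an EC2 instance, where the credentials can come from an IAM instance profile instead of a local configuration file.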
Our client has asked us to provide specifications for setting up an Amazon server so that we can run all these processes every day, as an application, at the click of a button. We planned to provide all the information after implementing the above end-to-end steps using our own AWS account. I have no prior experience setting up AWS or databases, but I want to learn more. These are the questions I have:
Can the above process be replicated on AWS? I use Python 2.7 and a SQLite db.
We don't use any relational features of the SQLite db while reading or writing data (like PK constraints, etc.). So is it better to read and write directly from the S3 bucket?
What are the different components on AWS that I need? As I understand it, to run the code I need EC2 (which provides the CPU, processors, etc.), and for storing, reading, and writing the data I need a data-storage component. (Sorry for the layman's terms; I'm a newbie and trying to learn.)
Is there anything else I need to keep in mind? Links to resources that can help me with the solution would be appreciated.
Regards,
Eswar

Send data to XDS Repository

So I'm trying to figure out how much capability comes with InterSystems for sending data to an XDS repository, specifically when using the basic Ensemble package (no HSF). Assume it's not the repository InterSystems delivers, but an external XDS repository.
Is there a built-in way to send a large blob and wrap the ebRIM around that blob?
As you can see at http://www.intersystemsbenelux.com/media/media_manager/pdf/1398.pdf, Ensemble does not natively support ebRIM, but it does support XML and XML schemas.
Maybe you could assemble an XML document and use that to wrap your blob content.
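A minimal sketch of that idea in Python; note that the element names here are illustrative only, not the real ebRIM schema, which a real XDS submission would require in full (ExtrinsicObject, RegistryPackage, classifications, and so on):

```python
# Minimal sketch: wrap blob content in an XML envelope. Element names
# are illustrative placeholders, not real ebRIM.
import base64
import xml.etree.ElementTree as ET

with open("document.pdf", "rb") as f:
    blob_b64 = base64.b64encode(f.read()).decode("ascii")

root = ET.Element("SubmitObjectsRequest")  # illustrative name
doc = ET.SubElement(root, "Document",
                    {"id": "doc-1", "mimeType": "application/pdf"})
doc.text = blob_b64

ET.ElementTree(root).write("submission.xml",
                           xml_declaration=True, encoding="UTF-8")
```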
You can send that over whatever protocol your XDS system provides (xDBC, SOAP, file system, etc.). Take a look at the items listed in the sections "Ensemble Interoperability" and "Ensemble Adapter and Gateway Guides" of http://docs.intersystems.com/ens20122/csp/docbook/DocBook.UI.Page.cls for a full list of connectivity options.
Regards,
There is the HealthShare Foundation product, which has XDS connectivity.
See this good answer on Google Groups: https://groups.google.com/forum/m/?fromgroups#!topic/Ensemble-in-Healthcare/h7R300H68KQ
Or see the HealthShare part of their website.
HSF (HealthShare Foundation) provides XDS.b connectivity for Query and Retrieve, and also for the Provide and Register operation.
OK, so I re-read your question and have an answer for you. I think what you are saying is that you have Ensemble, not HSF, and you still want to be able to send documents (XDS Provide and Register).
I did some testing with the open-source integration engine Mirth and stumbled across one of their example channels, which does a Provide and Register with straight-up SOAP calls to the endpoint.
Basically, build the required SOAP envelope accordingly, then send the PDF or document to the repository using MTOM.
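A minimal sketch of that approach in Python; the endpoint URL and envelope body are placeholders, and a production XDS.b call would normally package the document as an MTOM/XOP multipart attachment, which is omitted here:

```python
# Minimal sketch: post a Provide and Register request as a plain SOAP
# call, mirroring what the Mirth channel does. Endpoint and envelope
# content are placeholders; real MTOM/XOP packaging is omitted.
import requests

ENDPOINT = "https://xds.example.org/repository/xdsrepositoryb"  # placeholder

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header/>
  <soap:Body>
    <!-- ProvideAndRegisterDocumentSetRequest with ebRIM metadata and
         the document reference goes here -->
  </soap:Body>
</soap:Envelope>"""

resp = requests.post(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "application/soap+xml; charset=UTF-8"},
)
resp.raise_for_status()
print(resp.text)
```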
This is what HealthShare makes its money on: encapsulating all of that manual construction of the objects that need to be sent to endpoints.
Anyway, a screenshot of the Mirth channel destination may give you an understanding:
http://www.integrationrequired.com/wp-content/uploads/2013/02/Capture.PNG

Retrieving WebSphere MQ queue depth using HermesJMS or a shell script

I have HermesJMS set up with soapUI. I'd like a small script that can go in, either via HermesJMS or some other way, and retrieve the queue depth of a particular queue.
Is there a way to do this easily?
Thanks
The JMS specification does not provide an API for object inquiry; however, IBM provides one, using native Java classes and the C API, through Programmable Command Formats, or PCF for short. The PCF reference docs are here.
If you have installed the WMQ client code (a free download with registration), you will have the sample programs on your machine. By default, these reside in C:\Program Files (x86)\IBM\WebSphere MQ\tools\pcf\samples on Windows or in /opt/mqm/samp/ on UNIX/Linux. Take a look at PCF_ListQueueNames.java for a starting point. If you substitute MQCMD_INQUIRE_Q for MQCMD_INQUIRE_Q_NAMES in that program, you'd be very close to what you require.
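If a script suits you better than Java, the same inquiry can be made from Python with the third-party pymqi package, which wraps the MQI. A minimal sketch; the queue manager, channel, host, and queue names below are placeholders:

```python
# Minimal sketch: inquire the current depth of a queue via pymqi
# (pip install pymqi; requires the WMQ client libraries to be installed).
import pymqi

queue_manager = "QM1"                 # placeholder
channel = "SYSTEM.DEF.SVRCONN"        # placeholder
conn_info = "mqhost(1414)"            # placeholder host(port)
queue_name = "MY.QUEUE"               # placeholder

qmgr = pymqi.connect(queue_manager, channel, conn_info)
try:
    q = pymqi.Queue(qmgr, queue_name, pymqi.CMQC.MQOO_INQUIRE)
    depth = q.inquire(pymqi.CMQC.MQIA_CURRENT_Q_DEPTH)
    print("%s current depth: %d" % (queue_name, depth))
    q.close()
finally:
    qmgr.disconnect()
```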
Alternatively, since you asked for alternatives, you might look at SupportPac MO72. This SupportPac can be used as a client version of runmqsc, so that from a central server you can write scripts that query your entire WMQ network. Of course, it also works in local-bindings mode. Among the other features that make MO72 great for scripting is an option to format output as one line per object. This lets you grep out the line of interest, then strip out the value you want.

Is the filesystem for Raven DB encrypted?

I'm just trying to determine whether the files on the filesystem used by RavenDB are encrypted or not. Can someone just open the files on the filesystem and convert them from binary to ASCII directly, or are they encrypted?
I am trying to convince our management to give RavenDB a shot, but they have concerns about security. They gave the example that you can't just open up an MS SQL DB file, convert it from binary to ASCII, and read it. So I am trying to verify whether RavenDB prevents that kind of thing as well.
Well, personally I think that your management sucks if they come up with such straw-man arguments.
To answer your question: no, you can't just open any file inside Raven's data folder with Notepad and expect to see something meaningful. So, for those who don't know how to program, yes, they are encrypted.
To convince your management, you can tell them that Raven uses the same underlying storage engine as Microsoft's Exchange Server. If they want to dig deeper, it's called ESENT.
RavenDB storage is not encrypted. You can open it with Notepad and see some pieces of data. At the same time, I do not think that MS SQL encrypts files by default either.
RavenDB added encryption in mid-2012. Get RavenDB's "bundle:encryption" and then make sure your key is properly encrypted in the .NET config file (or wherever you keep it).
http://ravendb.net/docs/article-page/3.0/csharp/server/bundles/encryption
http://ayende.com/blog/157473/awesome-ravendb-feature-of-the-day-encryption
SQL Server 2008 does have encryption, but you need to prepare the DB instance beforehand to enable it, then create the DB with encryption enabled, and only then store data.
If you haven't done that, someone could just copy the DB off the machine and open it with a tool that can read the file format.
With RavenDB, you can just tick the box and off you go! (Although I do not know the intricacies of moving backups to another machine and restoring them.)
In relation to the point your management made, this is a relatively pointless argument.
If someone has direct access to a DB's file, it's game over; encryption is your very last line of defence.
[I don't think hackers are going to be opening a 40 GB file in Notepad... that's just silly :-)]
So instead of starting at the worst case, look at the controls you can implement before that level of concern is even reached.
You need to work out how someone would even get to that file (and the costs associated with all of the mitigation techniques):
What if they steal the server, or the disk inside it?
What if they can get to the DB via a file share?
What if they can log onto the DB server?
What if a legitimate employee siphons off the data?
Physical Access
Restricting direct access to a server mitigates the risk of it being stolen. You have to think about all of the preventative controls (door locks, ID cards, iris scanners), the detective controls (alarm systems, CCTV), and how much you want to spend on them.
Hence why cloud computing is so attractive!
Access Controls
You then have to get onto the machine via RDP or connect remotely to its file system via Active Directory, restricted so that only a select few can access it - probably IT support and database administrators. Being administrators, they should be vetted and trusted within the organisation (through an Information Security Governance Framework).
If you wanted to reduce the risk even further, you could implement two-factor authentication, as banks do, so that even knowing the username and password doesn't get you onto the server!
Then there's the risk of employees of your company accessing it, legitimately and illegitimately. I mean, why go to all the trouble of buying security guards, dogs, and a giant fence when users can query the data anyway? You should only allow certain operations on certain parts of the data.
In summary, 'defence in depth' is how you respond to this. There is always some identifiable risk, but you need to consider the number of controls in place and add more if the risk is too high. Bear in mind that adding more controls generally makes the system less user-friendly.

ASP.NET: Location for storing files that should be shared between several web-applications

I have two web applications. One is an outward-facing application that will be accessible from the internet. The other is an application to manage the first, which will only be accessible from the intranet.
They keep their data in files on the filesystem (I think a database would be overkill for these applications).
The management application should be able to write some files that the outward-facing application can read (data files used to supply responses to requests from the internet), and the outward-facing application should be able to write a file that the management application can read (a log file).
My question is: what is the best place to store these files?
Application Data/[Company Name]/[Product Name]?
An App_Data folder under one of the web applications?
Somewhere else?
Some factors to consider: What extra permissions does the solution need? Can each web application discover the location without needing to know where the other application is installed?
Thanks in advance for any suggestions!
I know you said a database would be overkill, but a two-sided app, where one side potentially gives access to internal systems, would be much more secure (though not entirely secure) if its resources were stored in a DB. It just adds an extra layer. Internet users should be given the bare minimum of permissions on the host file system (whether through a web-layer identity such as NETWORK SERVICE or not).
Otherwise, why not use a "sandbox" path on a physically separate device (one that can be disconnected if needed, e.g., on suspicious activity), such as a USB hard disk?
