File location identification in Informatica - UNIX

I have a few file names and need to identify which mapping/workflow generates them. Is it possible to check this in the repository or at the UNIX level? Your advice would help me.

If you have access to the PowerCenter repository database, you can get information about file connections associated with sessions (e.g. source or target files) from the Metadata Exchange (MX) views:
REP_SESSION_FILES contains file connections associated with reusable sessions
REP_SESSION_INST_FILES contains file connection information for session instances associated with workflows
Source: PowerCenter 8.6.1 Repository Guide (login required)
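
If you can reach the repository database from a script, a small query against those views is a quick way to search. Below is a minimal sketch in Python; the column names (SESSION_NAME, FILE_NAME, DIR_NAME) are assumptions to verify against the MX view definitions in your Repository Guide, and the driver/credentials are placeholders for your repository database:

```python
# Hedged sketch: search the MX views for file connections whose file name
# matches the file you are trying to trace. Column names are assumptions --
# check the MX view definitions in your version of the Repository Guide.
import cx_Oracle  # assuming an Oracle-hosted repository; adjust for your DB

FILE_TO_TRACE = "customer_extract.dat"  # hypothetical file name

conn = cx_Oracle.connect("rep_user/rep_pwd@rep_db")  # placeholder credentials
cur = conn.cursor()
cur.execute(
    """
    SELECT session_name, file_name, dir_name
      FROM rep_session_files
     WHERE file_name LIKE :fname
    """,
    fname=f"%{FILE_TO_TRACE}%",
)
for session_name, file_name, dir_name in cur:
    print(session_name, dir_name, file_name)
conn.close()
```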

Depending on how you have named your objects, it may be possible to match those files to particular mappings.
For example, Informatica generates several cache files in the cache directory. If you are using a cached Lookup, then depending on the name of the lookup (or the name you have used for a named cache) you may be able to identify which lookup created a given file. The same approach applies to Aggregator and Rank caches.
It is easier still if you maintain an offline mapping between transformation names and the mappings that contain them, which a small script can then use, as in the sketch below.
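
A minimal sketch of that idea, assuming you keep such an offline list of transformation names: scan the cache directory and report which known name (if any) appears in each cache file name. The directory, names, and extensions below are hypothetical:

```python
# Hedged sketch: match cache file names against a maintained list of
# transformation / named-cache names. All paths and names are examples.
import os

CACHE_DIR = "/infa/cache"                    # hypothetical cache directory
KNOWN_NAMES = ["LKP_CUSTOMER", "AGG_SALES"]  # maintained offline (see above)

for fname in os.listdir(CACHE_DIR):
    if not fname.endswith((".idx", ".dat")):  # typical cache file extensions
        continue
    matches = [n for n in KNOWN_NAMES if n.lower() in fname.lower()]
    print(fname, "->", matches or "unknown")
```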

The Informatica support team provides a tool called 'Metaquery' which can be used to get metadata information. This tool might give you the details you are looking for. It can be downloaded from the Informatica Marketplace or their support site.

Related

How to keep the generation number when copying a file from one GCS bucket to another

I'm using a GCS bucket for WordPress (wp-stateless plugin).
After creating and uploading a media file to a bucket, I copy it to another bucket (a duplicate). But the generation number of each object changes (seemingly at random).
My question is: how can I keep the generation numbers in the destination bucket the same as in the source bucket?
Thanks in advance.
Basically, there’s no official way of keeping the same version and generation numbers when copying files from one bucket to another. This is working as intended, and intuitive: the generation number refers to this object (which resides in this bucket); when you copy it to another bucket, it's not the same object (it's a copy), so it cannot keep the same number.
I can think of a workaround: keep your own record of the objects' versions somewhere and then make an organized copy through the API. This would mean dumping the bucket, but you would need a list of all the objects and their versions and then add them in sequential order (which sounds like a lot of work). Alternatively, you could keep your own versioning (or mirror the existing one) in the metadata of each object.
If your application depends on the objects' versioning, I would recommend using custom metadata. If you do your own versioning with custom metadata, the objects keep that metadata when copied to a new bucket, as in the sketch below.
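A minimal sketch of that workaround with the google-cloud-storage Python client: stamp your own version marker into the object's custom metadata before copying, so it survives the copy. Bucket and object names are placeholders:

```python
# Sketch: preserve your own version marker across a bucket-to-bucket copy
# by storing it in custom metadata (which the copy keeps), since the GCS
# generation number itself cannot be carried over.
from google.cloud import storage

client = storage.Client()
src_bucket = client.bucket("source-bucket")       # placeholder names
dst_bucket = client.bucket("destination-bucket")

blob = src_bucket.blob("uploads/image.png")
blob.reload()                                     # fetch current generation
blob.metadata = {"my-version": str(blob.generation)}  # your own marker
blob.patch()                                      # persist the metadata

copied = src_bucket.copy_blob(blob, dst_bucket)   # copy keeps custom metadata
copied.reload()
print(copied.metadata["my-version"])              # same marker, new generation
```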
There is already a feature request about this, but it is marked as currently infeasible.
However, you can raise a new feature request here

How to lock a Python variable file in Robot Framework?

I need to store my user id and password in a Python variable file in Robot Framework. These credentials will be used to log in to the website under test. No other person should be able to view my credentials (even in git). Hence, I have to lock this variable file. Is there any way to do that?
Source code repository systems are public by nature: either you lock down the whole repository, or it is open to everyone with access. This makes storing any type of sensitive data in such a system a bad idea.
For this type of information it is typically best to keep a separate file and refer to it when executing the run. In Robot Framework this can be done using variable files, loaded with the Variables myvariables.<ext> setting; Python and YAML files are supported.
Securing these files can range from placing them in a location only a few people can access, to setting up tools that store them encrypted and only make them available with the right key. That is a separate topic with its own challenges, but a simple middle ground is sketched below.
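As one simple middle ground, here is a minimal sketch of a Python variable file (say, credentials.py) that can be committed safely because the actual values are injected through environment variables at run time; load it with Variables credentials.py in the Settings section. Variable and environment names are just examples:

```python
# credentials.py -- a Robot Framework variable file whose secret values
# come from the environment at run time, so nothing sensitive is in git.
import os

LOGIN_USER = os.environ["TEST_USER"]          # set outside the repository
LOGIN_PASSWORD = os.environ["TEST_PASSWORD"]  # never committed to git
```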

JavaFX app - unique id for each distribution

I have a JavaFX app which is available for download on my site. I am looking for a way to remotely and uniquely identify each downloaded copy. Is it possible to store an id (e.g. in a txt file) in the JavaFX app's package immediately before download?
Thanks for any suggestions
Each time you distribute it, you could try signing and timestamping the jar file for distribution. That way you can ensure that the file is not tampered with and validate its signature and timestamp either locally or in a callback to a service you provide, if necessary.
Also consider java-webstart cited here.
Yes, signing and webstart technologies can be used together if desired. Those two technologies can be used separately or together, so you can choose what is appropriate for your app. See the javapackager documentation for more details of the packaging process for web start (go through the documentation and refer to the sections that reference jnlp). Be aware that web start currently only works with Oracle JDK (as far as I know).
For your purposes, you would create a script that executes on each download request to generate a unique id or timestamp (or gets a timestamp from a timestamp service) and adds it to the package before signing the package and offering it for download. You could record the download instance UUID and timestamp, together with the referring IP address or user id (if you have a login system on your website), in a server-side database to track who downloaded what at what time; a sketch of this follows.
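A hedged sketch of that stamping step in Python (keystore, alias, and TSA URL are placeholders). Note the jar is signed after the id entry is added, since adding entries to an already-signed jar would invalidate its signature:

```python
# Sketch: copy a master (unsigned) jar, embed a fresh UUID + timestamp as a
# text entry, then sign the copy with jarsigner. Keystore details are
# placeholders; record the UUID server-side alongside requester info.
import shutil
import subprocess
import uuid
import zipfile
from datetime import datetime, timezone

def stamp_and_sign(master_jar: str, out_jar: str) -> str:
    download_id = str(uuid.uuid4())
    shutil.copyfile(master_jar, out_jar)
    with zipfile.ZipFile(out_jar, "a") as jar:          # append, don't rebuild
        jar.writestr("download-id.txt",
                     f"{download_id} {datetime.now(timezone.utc).isoformat()}")
    subprocess.run(
        ["jarsigner", "-keystore", "my.keystore",       # placeholder keystore
         "-tsa", "http://timestamp.example.com",        # placeholder TSA URL
         out_jar, "mykeyalias"],                        # placeholder alias
        check=True,
    )
    # record download_id plus requester info (IP, user id) in your database
    return download_id
```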
If using web start, you use a JNLP deployment as mentioned in the linked documentation. There are options for packaging the JNLP to interact with some JavaScript on a webpage, which can reduce network traffic and speed up the download and usage process. Sophisticated deployment mechanisms can dynamically generate that download package, and the download page with JavaScript calls which embed JNLP data. Details or samples of such systems are outside the scope of the information I can provide here.

Where is it better to store uploaded files: in the DB as a BLOB, or in a folder with restrictions?

I'm working with FileUpload in my project, and the project will be heavily visited (not out of ambition: the web application works with a payment system, which is why it will be under high load). I wonder, what's better for storing the users' files? My project is based on ASP.NET.
I suggest two variants:
save as/load a BLOB object into/from database
save/load to/from a folder where the files are located, and keep info about the files in a table for owner recognition; the table design in BNF:
<user_files> ::= ( <id ::= int, primary_key, auto_increment, indexed> <user_id ::= int> <file_guid ::= varchar(255)> ) | nil
I prefer the BLOB approach, but I am afraid of future high load, because fetching data from the database requires more CPU time and memory allocations:
I need to use a connector, which opens a new socket to connect to the DB on localhost
then I must call a stored procedure to get the BLOB object
at the client side, I must get the result from some classes of the connector
I must deserialize it
and only then send the file to the user, uncompressed and uncorrupted, so that they can later open it in some editor (the files will often be images and MS Office documents)
Since all these operations may load the server and take more time, I think this would be too slow for 2000 users online exchanging documents very quickly.
As for storing files on the filesystem, I see only one problem:
securing access to the files correctly, because different users must not see each other's docs; they must be hidden from other users. I am worried because the folder users upload to is visible to the Windows system user that IIS runs as (IISUser...), since otherwise users would not be able to upload anything, so the folder is effectively public. The only solution I see is to write a Windows service and use the IIS folder only for temporary uploads: the service takes files from it and moves them to a secure folder that web users cannot see.
But maybe my ideas are heading in the wrong direction, which is why I'm asking you for a piece of advice; I want to make the system as solid as possible.
Thank you!
securing access to the files correctly
If you run into this situation you are already violating the OWASP security guidelines, since your files are insecure direct object references. This means that users can access files directly, because you opened a complete subfolder on IIS (like www.mysite.com/files/some_file.pdf) and your files probably have predictable names.
What you should do instead is:
Register each file in the database with a unique ID; not its data, just its name and the user who uploaded it (optionally including rights or roles).
Store the file on disk where the file name is the database identifier.
Don't allow direct access but write a special HttpHandler that takes in the id of the document (just as you would do when storing the files inside your database).
When taking this approach, you achieve the following:
Files have a unique number, which prevents them from having naming conflicts on disk.
The HttpHandler can check in the database whether the user downloading the file has the proper rights to do so.
Because IDs are used, you are not vulnerable to canonical representation attacks, where the attacker does a request like this: www.mysite.com/file.ashx?file=..\web.config.
So from a security perspective, there is no problem in storing files on disk instead of your database.
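Since the question is about ASP.NET but the pattern itself is language-neutral, here is a minimal sketch of it in Python; the in-memory "database", paths, and helper names are illustrative only:

```python
# Sketch of the pattern above: the database row owns the identity, the disk
# file is named by that id, and every download goes through a rights check.
import os
import uuid

UPLOAD_DIR = "/srv/app/files"   # outside the web root: no direct URLs
files_table = {}                # file_id -> {"name": ..., "owner": ...}

def save_upload(owner_id: int, original_name: str, data: bytes) -> str:
    file_id = str(uuid.uuid4())  # unique id prevents naming conflicts
    files_table[file_id] = {"name": original_name, "owner": owner_id}
    with open(os.path.join(UPLOAD_DIR, file_id), "wb") as f:
        f.write(data)
    return file_id

def serve_file(file_id: str, requester_id: int) -> bytes:
    row = files_table.get(file_id)           # id lookup, never a user path
    if row is None or row["owner"] != requester_id:   # the rights check
        raise PermissionError("not allowed")          # -> HTTP 403 in a handler
    with open(os.path.join(UPLOAD_DIR, file_id), "rb") as f:
        return f.read()
```

Because the handler only ever opens files named by a validated database id, requests like file=..\web.config never reach the filesystem.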
Storing in a database will scale much better over time. If you use the folder solution, and someday you need or decide to use a cluster, synchronizing the files throughout the server farm will be hellish.
Even though fetching stuff from a database may be more CPU intensive, it does simplify a lot of things (your code will surely be more maintainable and portable), and you can always count on hosting and processing costs diminishing over time.
You can also cache stuff for speed. Either way I hope those files don't change a lot after being uploaded.

How do permissions on a PlasticSCM repository work in a DVCS scenario

So I've been working on a rather large project, using PlasticSCM as my VCS. I use it with a DVCS model, but so far it's pretty much just been me syncing between my office machine and home.
Now we're getting other people involved in the project, and what I would like to do is restrict the other developers to specific branches so that only I can merge branches into /main.
So I went to my local repository and made the permissions changes (that part's pretty straightforward). But how does that work with the other developers? When they sync up, are the permissions replicated on their local repositories? If they attempt to merge into /main on their local repository, is that allowed, with an error only when they attempt to push the changes to my repository?
This is my first foray into DVCS so I'm not quite sure how this kind of thing works.
Classic DVCS tools (Mercurial, Git) don't include ACLs, meaning a clone wouldn't keep any ACL restriction.
This is usually enforced through hooks on the original repo (meaning you might be able to modify the wrong branch on a cloned repo, but you wouldn't be able to push it back to the original repo).
As the security page mentions, this isn't the case for PlasticSCM: a clone should retain the ACLs (with the caveat below) set on an object, which inherits those ACLs through two realms: the file system hierarchy (directory, subdirectories, files) and the repository object hierarchy.
The caveat in a DVCS setting is that there must be a mechanism in place to translate users and groups from one site to another.
The Plastic replication system supports three different translation modes:
Copy mode: it is the default behaviour. The security IDs are just copied between repositories on replication. It is only valid when the servers hosting the different repositories involved work in the same authentication mode.
Name mode: translation between security identifiers is done based on name. For example, suppose user daniel has to be translated by name from repA to repB. At repB the Plastic server will try to locate a user named daniel and will add its LDAP SID to the table if required.
Translation table: this also performs a translation based on name, but driven by a table. The table, specified by the user, tells the destination server how to match names: how a source user or group name is converted into a destination name, even across different authentication modes.
Note: a translation table is just a plain text file with two names per line separated by a semicolon (";"). The name on the left indicates the user or group to be translated (source) and the one on the right the destination name.
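A hypothetical example of such a table, translating a user and a group to ActiveDirectory-style names on the destination server (all names invented):

```
daniel;MYDOMAIN\daniel.pena
developers;MYDOMAIN\dev-team
```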
