Is there a way to do offline directory changes through an LDIF file? - ldif

I have an LDIF file containing multiple modify operations. I would like to apply them offline, in the same way that import-ldif can import data offline.
I've looked at import-ldif, but I understand it can only apply add operations, not modify operations. I tried it anyway, but every operation was rejected.
How can I achieve this?

The import-ldif tool is meant for fast imports of data into the OpenDJ directory server, not for offline processing of changes.
OpenDJ has a tool called ldifmodify which lets you apply one or more operations (expressed in LDIF change format) against an existing LDIF file (representing the entries).
Please read the OpenDJ documentation for details and examples.
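To illustrate (a minimal sketch; the DN and attribute below are made-up examples), the changes file is simply a series of standard LDIF change records:

    # changes.ldif -- each record carries a changetype and the attributes to modify
    dn: uid=jdoe,ou=People,dc=example,dc=com
    changetype: modify
    replace: mail
    mail: jdoe@example.com

You would then run something along the lines of ldifmodify -s data.ldif -t updated-data.ldif changes.ldif, where data.ldif is the existing export and updated-data.ldif is the result; the exact flag names vary between OpenDJ releases, so check ldifmodify --help for yours.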
Regards,
Ludovic.

Related

How to keep the generation number when copying a file from one GCS bucket to another

I'm using a GCS bucket for WordPress (the wp-stateless plugin).
After creating and uploading media files to a bucket, I copy them to another bucket (a duplicate). But the generation number of each object changes (seemingly at random).
My question is: how can I keep the same generation numbers in the destination bucket as in the source bucket?
Thanks in advance.
Basically, there's no official way of keeping the same version and generation numbers when copying files from one bucket to another. This is working as intended, and intuitive: the generation number refers to a particular object in a particular bucket, and when you copy it to another bucket it's not the same object (it's a copy), so it cannot keep the same generation number.
I can think of a workaround: keep your own record of the objects' versions somewhere and then make an organized copy through the API. This would mean dumping the bucket, but you would need a list of all the objects and their versions and then add them in sequential order (which sounds like a lot of work). You could instead keep your own versioning (or mirror the existing one) in the metadata of each object.
If your application depends on the objects' versioning, I would recommend using custom metadata. Basically, if you do your own versioning with custom metadata, that metadata is kept when the objects are copied to a new bucket.
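For example, with gsutil (a rough sketch; the bucket names, object name, and metadata key are placeholders):

    # Record your own version number as custom metadata on the source object
    gsutil setmeta -h "x-goog-meta-my-version:42" gs://source-bucket/image.jpg

    # Copy to the destination bucket; custom metadata is carried along with the copy,
    # although the destination object still gets a new generation number
    gsutil cp gs://source-bucket/image.jpg gs://destination-bucket/image.jpg

    # Inspect the copy to confirm the metadata arrived
    gsutil stat gs://destination-bucket/image.jpg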
There is already a feature request about this, but it is noted there that it's currently infeasible.
However, you can raise a new feature request here.

Making sqlite3_open() fail if the file already exists

I'm developing an application that uses SQLite for its data files. I'm just linking in the SQLite amalgamation source, using it directly.
If the user chooses to create a new file, I check to see if the file already exists, ask the user if they want to overwrite the file, and delete it if they say yes. Then I call sqlite3_open_v2() with flags set to SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE to create and open the new data file.
Which is fine, except, what happens if a malicious user recreates the file I'm trying to open in between the file being deleted and SQLite opening it? As far as I'm aware, SQLite will just open the existing file.
My program doesn't involve passwords or any kind of security function whatsoever. It's a pretty simple app, all things considered. However, I've read plenty of stories where someone uses a simple app with an obscure bug in it to bypass the security of some system.
So, bottom line, is there a way to make sqlite3_open() fail if the file already exists?
You might be able to patch in support for the O_EXCL flag of open(2), if you are using SQLite on a platform that supports it.
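As an alternative that avoids patching SQLite itself, here is a rough POSIX-only sketch: create the file atomically with open(2) and O_CREAT | O_EXCL, and only hand it to sqlite3_open_v2() if that succeeds. (This narrows the window considerably, though anyone with write access to the directory can still interfere with the file afterwards.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sqlite3.h>

    /* Create a brand-new database file; fail if the path already exists. */
    static int create_new_db(const char *path, sqlite3 **db)
    {
        /* O_EXCL makes open() fail if someone recreated the file after we deleted it. */
        int fd = open(path, O_CREAT | O_EXCL | O_RDWR, 0600);
        if (fd == -1) {
            perror("open");
            return -1;
        }
        close(fd);  /* SQLite reopens the file itself. */

        /* SQLite treats a zero-length file as an empty database. */
        if (sqlite3_open_v2(path, db,
                            SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
                            NULL) != SQLITE_OK) {
            fprintf(stderr, "sqlite3_open_v2: %s\n", sqlite3_errmsg(*db));
            sqlite3_close(*db);
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        sqlite3 *db = NULL;
        if (create_new_db("newfile.db", &db) == 0) {
            /* ... use the database ... */
            sqlite3_close(db);
        }
        return 0;
    }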

file location identification in informatica

I have a few file names and need to identify which mapping/workflow is generating those files. Is it possible to check this in the repository or at the UNIX level? Your advice would help me.
If you have access to the PowerCenter repository database, you can get information about file connections associated with sessions (e.g. source or target files) from the Metadata Exchange (MX) views:
REP_SESSION_FILES contains file connections associated with reusable sessions
REP_SESSION_INST_FILES contains file connection information for session instances associated with workflows
Source: PowerCenter 8.6.1 Repository Guide (login required)
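For example, a query along these lines could narrow it down (a sketch only; the column name used in the WHERE clause is an assumption, so verify it against the MX Views reference for your version):

    -- Column names are assumed for illustration; check the MX Views reference.
    SELECT *
      FROM REP_SESSION_FILES
     WHERE FILE_NAME LIKE '%your_file_name%';

    SELECT *
      FROM REP_SESSION_INST_FILES
     WHERE FILE_NAME LIKE '%your_file_name%';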
Depending upon how you have named your objects, it may be possible to match those files to particular mappings.
For example, Informatica generates several cache files in the cache directory. If you are using a cached Lookup, then depending on the name of the lookup (or the name you used for a named cache) you may be able to identify which lookup created the file. The same approach applies to Aggregator and Rank caches.
It would be easier if you maintained an offline mapping between transformation names and the mappings that contain them.
The Informatica support team provides a tool called 'Metaquery' which can be used to get metadata information. It might give you the details you are looking for, and can be downloaded from the Informatica Marketplace or their support site.

Gather data from Drupal and export to CSV on a schedule

We have a Drupal site and we wish to export data from this site in the form of several CSV files. I'm aware of the Views module add-ons that make this a very simple process on demand, but what we're looking for is a way to automate this process through cron.
Most likely, we'll end up having to either write a standalone PHP file we can then access with cron to complete this action, or a custom module.
I first wanted to check that there isn't already a module or set of modules out there that will do what we're looking for. How would you accomplish this?
The end result is that these CSV files will reside on the server for other services to pick up and import into their own systems, or be distributed with rsync or something similar.
Best practices suggestions would also be appreciated!
If you want to do this with cron:
Set up views with CSV data in them.
Then add wget <path to your csv view>, or the path of a script which does everything you need, to your crontab.
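For instance, a crontab entry along these lines would fetch the view's CSV output nightly (the schedule, URL, and output path are placeholders):

    # Fetch the CSV view every night at 02:00 and drop it where other services can pick it up
    0 2 * * * wget -q -O /var/exports/report.csv "https://example.com/path/to/csv-view"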

need help in choosing the right tool

I have a client who has set up a testing environment in some AI language. It basically runs some predefined test cases and stores the results as log files (comma-separated txt files). My job is to identify and suggest a reporting system, and I have these options in mind:
1. Import the logs into MSSQL and use its reporting (SSRS), or
2. Import the logs into MySQL and use PHP to develop custom reporting.
I am thinking that option 2 is better. The reason is that the logs are inconsistent and contain unexpected wild characters that databases normally don't accept, so I can write some scripts in PHP to clean them up before loading them into the database.
If this were your problem, what would you suggest doing?
It depends on how fancy you need to be. If the data is in CSV files, you could go as simple as loading it into Excel (or your client's favorite spreadsheet tool) and using spreadsheet macros to analyze it.
