Open LDAP Auditing not showing timestamp when deleting members - openldap

We are preparing an audit script for LDAP and we are using OpenLDAP.
The audit log has a timestamp when adding a new member or modifying a member.
The audit.ldif is as follows:
# add 1442945148 dc=com,cn=admin IP=...
dn:..
modifyTimestamp: 20150922180548Z
# end add 1442945148
# modify 1442945124 dc=com,cn=admin IP=...
...
-
replace: modifyTimestamp
modifyTimestamp: 20150922180524Z
-
# end modify 1442945124
# delete 1442945148 dc=com,cn=admin IP=...
dn: ...
changetype: delete
# end delete 1442945148
Here we have a timestamp for both add and modify. However, there is no timestamp for delete.
I couldn't find any useful information on how to enable a timestamp for the LDAP auditing delete operation.
Is there a way to log the delete timestamp in the audit log?
The audit report is expected to show user actions on a daily basis, and the timestamp is mandatory.
Thanks,
Mathew Liju

There are timestamps in your log.
The "modifyTimestamp:" attribute is NOT what you should be looking for; that is just the attribute that is set on the entry in LDAP.
Your audit log uses Unix timestamps: https://en.wikipedia.org/wiki/Unix_time
Look at the
# delete TIMESTAMP dc=com,cn=admin IP=...
..
# end delete TIMESTAMP
That translates to (in my timezone)
Tue Sep 22 20:05:48 CEST 2015
On the Linux command line, use
date --date='@TIMESTAMP'
to translate those timestamps, or use an online converter.
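For example (the audit log path below is an assumption, and the @ syntax needs GNU date):
# convert the single timestamp from the "# delete ..." line above
date --date='@1442945148'
# or convert every delete timestamp in the audit log
grep '^# delete ' /var/lib/ldap/audit.ldif | awk '{print $3}' | xargs -I{} date --date='@{}'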

Related

ORACLE 11g Know Insert Record Details which Failed to insert

I have started auditing insert records by a user on failure to any table in my Oracle 11g database. I have used the following command to do so.
AUDIT INSERT ANY TABLE BY SHENA BY ACCESS WHENEVER NOT SUCCESSFUL;
I would like to know, whenever a record insert fails, which records failed to be inserted into the table.
Where can we see such information? If you know any other way of auditing the same, please suggest it. One way I know of is to write a trigger on insert, handle the insert failure EXCEPTION in that trigger, and save those values to some table.
Use the SQL*Loader utility with the following control file format.
options(skip=1,rows=65534,errors=65534,readsize=16777216,bindsize=16777216)
load data
infile 'c:\users\shena\desktop\1.txt'
badfile 'C:\Users\shena\Desktop\test.bad'
discardfile 'C:\Users\shena\Desktop\test.dsc'
log 'C:\Users\shena\Desktop\test.log'
append
into table ma_basic_bd
fields terminated by '|' optionally enclosed by '"' trailing nullcols
(fs_perm_sec_id,
"DATE" "to_date(:DATE,'YYYY-MM-DD')",
adjdate "to_date(:adjdate,'YYYY-MM-DD')",
currency,
p_price,
p_price_open,
p_price_high,
p_price_low,
p_volume)
You are requested to use conventional path loading so that the rejected records (rejected because of datatype mismatch or business rule violations) end up in the .bad file. Conventional path loading is the default option.
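For reference, a conventional-path invocation could look like this (username, password, TNS alias and file paths are placeholders; leaving out direct=true keeps the default conventional path):
sqlldr shena/password@ORCL control=C:\Users\shena\Desktop\test.ctl log=C:\Users\shena\Desktop\test.log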
The following URL can be used for detailed knowledge:
https://youtu.be/eovTBGAc2RI
There are 4 videos in total. Very helpful.

Intershop: How To Delete a Channel After Orders Have Been Created

I am able to delete a channel from the back office UI and run the DeleteDomainReferences job in SMC to clear the reference and be able to create a new channel again with the same id.
However, once an order has been created, the above mentioned process won't work.
I heard that we can run some stored procedures against the database for situation like this.
Question: what are the stored procedures and steps to take to be able to clean any reference in Intershop so that I can create a channel with the same id again?
Update 9/26:
I did configure a new job in SMC to call DeleteDomainReferencesTransaction pipeline with ToBeRemovedDomainID attribute set to the domain id that I am trying to clean up.
The job ran without error in the log file. The job finished almost instantly, though.
Then I ran the DeleteDomainReferences job in SMC. This is the job I normally run after deleting a channel when there are no orders in that channel. This job failed with the following exception in the log file:
ORA-02292: integrity constraint (INTERSHOP.BASKETADDRESS_CO001) violated - child record found
ORA-06512: at "INTERSHOP.SP_DELETELINEITEMCTNRBYDOMAIN", line 226
ORA-06512: at line 1
Then I checked the BASKETADDRESS table and did see records for that domain id. This is, I guess, the reason why the DeleteDomainReferences job failed.
I also executed SP_BASKET_OBSERVER with that domain id, but it didn't seem to make a difference.
Is there something I am missing?
sp_deleteLineItemCtnrByDomain
-- Description : This procedure deletes basket/order related stuff.
-- Input : domainID The domain id of the domain to be deleted.
-- Output : none
-- Example : exec sp_deleteLineItemCtnrByDomain(domainid)
This stored procedure should delete the orders. Look up the domainid that you want to delete in the domaininformation table and call this procedure.
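For illustration, this is roughly what that could look like from SQL*Plus (the connect string is a placeholder, and the DOMAINNAME column used for the lookup is an assumption; paste the id returned by the query into the exec call):
sqlplus intershop/password@ISDB
SELECT domainid FROM domaininformation WHERE domainname = 'MyChannel';
exec sp_deleteLineItemCtnrByDomain('<domainid from the query above>')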
You can also call the pipeline DeleteDomainReferencesTransaction. Set up an SMC job that calls this pipeline with the domain id that you want to clean up as a parameter. It also calls a second stored procedure that cleans up the payment data, so it is actually a better approach.
Update 9/27
I tried this out on my local 7.7 environments. The DeleteDomainReferences job also removes the orders from the isorder table. No need to run sp_deleteLineItemCtnrByDomain separately. Recreating the channel I see no old orders. I'm guessing that you discovered a bug in the version you are running. Maybe related to the address table being split into different tables. Open a ticket for support to have them look at this.
With the assistance of Intershop support, it has been determined that, in IS 7.8.1.4, sp_deleteLineItemCtnrByDomain.sql has an issue.
Lines 117 and 118 from 7.8.1.4:
delete from staticaddress_av where ownerid in (select uuid from staticaddress where lineitemctnrid = i.uuid);
delete from staticaddress where lineitemctnrid = i.uuid;
should be replaced by
delete from basketaddress_av where ownerid in (select uuid from basketaddress where basketid = i.uuid);
delete from basketaddress where basketid = i.uuid;
After making the stored procedure update, the DeleteDomainReferences job finishes without error and I was able to re-create the same channel again.
The fix will become available in a 7.8.2 hotfix, as I was told.

HashiCorp Vault project - write additional key/value pair without overwriting existing ones

When I put the first key/value pair to Vault:
vault write secret/item/33 item_name='item_name'
It works well and I get:
vault read secret/item/33
Key Value
--- -----
refresh_interval 768h0m0s
item_name item_name
But if I want to put an additional field item_type:
vault write secret/item/33 item_type='item_type'
It overwrites the existing one:
vault read secret/item/33
Key Value
--- -----
refresh_interval 768h0m0s
item_type item_type
How can I write an additional field (key/value pair) to Vault without replacing the existing ones?
Vault with kv v2 engine has added this ability.
vault kv patch secret/item newkey=newvalue
You can only store one value per key (confirmed by a Vault developer).
Either you come up with a data structure that is suitable and write a long string to this key, or you use a single key for each value, which could look as follows:
vault write secret/item/33/name item_name='item_name'
vault write secret/item/33/type item_type='item_type'
Vault doesn't allow you to append to an existing secret. It's actually really annoying. You first have to read the previous key/values and then write them back in at the same time that you're writing in the new key/values.
Here is a blog post I found where someone talks about that process: https://www.fritz.ninja/extending-vault-cli-with-some-ruby-love/
Essentially, he wrote his own command line tool that does the append for you automatically. He says he created the tool for his job, so he can't share the code, but he's started an open-source version on Github called Vaulty: https://github.com/playpasshq/vaulty
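A minimal sketch of that read-then-rewrite workaround on KV v1 (jq is optional here, only used to display the existing data before writing everything back in one command):
vault read -format=json secret/item/33 | jq '.data'
vault write secret/item/33 item_name='item_name' item_type='item_type'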
vault kv put secret/hello foo=world excited=yes
Even with Key/Value v1 you should be able to set multiple pairs, as long as you specify them all in the same put command.
Late to the OP. Anyway, what I'm doing is creating a JSON file with levels and sublevels of data (nested objects, a feature not available when typing plain KEY=VALUE inputs) and then loading the file into the KV engine through the @ operator, like $ vault kv put -mount=secret foo @data.json, handling file permissions and ownership properly, of course. Whenever I need to rotate the values, I just change the JSON file and re-upload it.
IMO it's even better than typing the values directly after the command, because those would remain in the bash history.

BizTalk 2013 file receive location trigger on non-event

I have a file receive location which is scheduled to run at a specific time of day. I need to trigger an alert or mail if the receive location is unable to find any file at that location.
I know I can create custom components or I can use BizTalk360 to do so, but I am looking for an out-of-the-box BizTalk feature.
BizTalk is not very good at triggering on non-events. Non-events are things that did not happen, but still represent a certain scenario.
What you could do is:
Insert the filename of any file triggering the receive location in a custom SQL table.
Once per day (scheduled task adapter or polling via stored procedure) you would trigger a query on the SQL table, which would only create a message in case no records were made that day.
Also think about cleanup: that approach will require you to delete any existing records.
Another option could be a scheduled task with a custom C# program which would create a file only if there were no input files, etc.
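As a rough sketch of that idea (shown as a shell script purely for illustration; on a BizTalk server this would typically be a PowerShell script or a small C# console app, and the folder and mail address are assumptions), a task scheduled shortly before the pickup window could alert when nothing has arrived:
#!/bin/sh
PICKUP_DIR=/data/biztalk/pickup    # assumption: the receive location's folder
ALERT_TO=ops@example.com           # assumption: notification address
# if nothing is waiting to be picked up, send an alert
if [ -z "$(find "$PICKUP_DIR" -maxdepth 1 -type f 2>/dev/null)" ]; then
    echo "No file waiting in $PICKUP_DIR before the scheduled pickup" | mail -s "BizTalk receive location alert" "$ALERT_TO"
fi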
The sequential convoy solution should work, but I'd be concerned about a few things:
It might consume the good message when the other subscriber is down, which might cause you to miss what you'd normally consider a subscription failure
Long running orchestrations can be difficult to manage and maintain. It sounds like this one would be running all day/night.
I like Pieter's suggestion, but I'd expand on it a bit:
Create a table, something like this:
CREATE TABLE tFileEventNotify
(
ReceiveLocationName VARCHAR(255) NOT NULL primary key,
LastPickupDate DATETIME NOT NULL,
NextExpectedDate DATETIME NOT NULL,
NotificationSent bit null,
CONSTRAINT CK_FileEventNotify_Dates CHECK(NextExpectedDate > LastPickupDate)
);
You could also create a procedure for this, which should be called every time you receive a file on that location (from a custom pipeline or an orchestration), something like
CREATE PROCEDURE usp_Mrg_FileEventNotify
(
    @rlocName VARCHAR(255),
    @LastPickupDate DATETIME,
    @NextExpectedDate DATETIME
)
AS
BEGIN
    IF EXISTS(SELECT 1 FROM tFileEventNotify WHERE ReceiveLocationName = @rlocName)
    BEGIN
        UPDATE tFileEventNotify SET LastPickupDate = @LastPickupDate, NextExpectedDate = @NextExpectedDate WHERE ReceiveLocationName = @rlocName;
    END
    ELSE
    BEGIN
        INSERT tFileEventNotify (ReceiveLocationName, LastPickupDate, NextExpectedDate) VALUES (@rlocName, @LastPickupDate, @NextExpectedDate);
    END
END
And then you could create a polling port that had the following Polling Data Available statement:
SELECT 1 FROM tFileEventNotify WHERE NextExpectedDate < GETDATE() AND (NotificationSent IS NULL OR NotificationSent <> 1)
And write up a procedure to produce a message from that table that you could then map to an email sent via SMTP port (or whatever other notification mechanism you want to use). You could even add columns to tFileEventNotify like EmailAddress or SubjectLine etc. You may want to add a field to the table to indicate whether a notification has already been sent or not, depending on how large you make the polling interval. If you want it sent every time you can ignore that part.
One option is to set up a BAM Alert to trigger if no file is received during the day.
Here's one mostly out of the box solution:
BizTalk Server: Detecting a Missing Message
Basically, it's an Orchestration that listens for any message from that Receive Port and resets a timer. If the timer expires, it can do something.

Grant privileges to users on MonetDB

I am trying to grant admin privileges, i.e. "monetdb" privileges, to users, without any success so far (the privileges should extend to all the tables of the sys schema).
As I have a schema with many tables, it would be very complex to give explicit rights (e.g. SELECT) to all the users for every table: I need to find an efficient way to do this.
I log into MonetDB with SQLWorkbenchJ using the superuser monetdb.
I have also tried to directly send queries using R and the MonetDB.R package (with no difference).
I create a user associated to a schema (e.g. the sys schema) as
CREATE USER "user1" WITH PASSWORD 'user1' NAME 'user one' SCHEMA "sys";
Then I try to GRANT "monetdb" privileges to user1
GRANT monetdb TO "user1";
And I do not get any error.
If I log into MonetDB as user1 and try a simple select (on a pre-existing table of the sys schema) I get:
SELECT * FROM sys.departmentfunctionindex
SELECT: access denied for user1 to table 'sys.departmentfunctionindex'
Clearly I'm missing something.
Any suggestion is welcome.
I think I get it now.
The following works. Using SQLWorkbenchJ, I log into MonetDB as monetdb (the superuser).
I run:
CREATE USER "user20" WITH PASSWORD 'user20' NAME 'user 20' SCHEMA "sys";
CREATE SCHEMA "mschema" AUTHORIZATION "monetdb";
CREATE table "mschema"."mtestTable"(v1 int, v2 int);
INSERT INTO "mschema"."mtestTable" VALUES (4, 4);
GRANT monetdb to "user20"; -- this gives implicit superuser powers to user20 (but not for the "sys" schema)
Now I log out and login again as user "user20".
I run:
SET SCHEMA mschema; -- I was missing this before but it is essential
SET ROLE monetdb;
At this stage user20 has got all the authorisations of the superuser monetdb, e.g.:
SELECT * FROM "mschema"."mtestTable"; -- can use select
DROP TABLE "mschema"."mtestTable"; -- can use drop etc.
Thanks to @Hannes, @Ying & @Dimitar
Here is a working example of granting SELECT privileges to a separate user in MonetDB:
As admin (e.g. the monetdb account):
CREATE SCHEMA "somedataschema";
CREATE TABLE "somedataschema"."somepersistenttable" (i INTEGER);
INSERT INTO "somedataschema"."somepersistenttable" VALUES (42);
CREATE USER "someuser" WITH PASSWORD 'someuserpass' NAME 'someusername' SCHEMA "sys";
GRANT SELECT ON "somedataschema"."somepersistenttable" TO "someuser";
CREATE SCHEMA "someuserschema" AUTHORIZATION "someuser";
ALTER USER "someuser" SET SCHEMA "someuserschema";
Now, someuser can SELECT from somepersistenttable, but not modify it. Should this user need a table of its own, someuserschema or temporary tables (deleted when the user logs out) can be used. Hence, this works:
SELECT * FROM "somedataschema"."somepersistenttable";
CREATE TABLE "someuserschema"."sometemptable" (i integer);
INSERT INTO "someuserschema"."sometemptable" VALUES (84);
SELECT * FROM "sometemptable";
CREATE TEMPORARY TABLE "sometemptable" (i INTEGER) ON COMMIT PRESERVE ROWS;
INSERT INTO "sometemptable" VALUES (42);
SELECT * FROM "sometemptable";
And this will produce an insufficient privileges error (same for update, drop, alter etc.):
INSERT INTO "somedataschema"."somepersistenttable" VALUES (43);
Please have a look at this MonetDB bug report. Granting privileges works for user-created schemas, but the bug fix seems not to cover the default "sys" schema.
No, you don't have to explicitly grant access to each individual table. You can still use GRANT monetdb TO "user1". See my previous answer; for now, you just need to create your tables under a user-created schema. I just tried it in the "default" branch of MonetDB.
