Is it possible to merge two OUs into one OU with a simple if/else? - openldap

Hello, I need advice or an idea on how to solve the following problem:
I have two OUs, a primary and a secondary. The primary OU is filled by a synchronization process, and the secondary OU stores entries from user input via a website: temporary overrides with a valid-through timestamp.
The client software cannot implement filter rules, so this must be implemented in OpenLDAP, so that a search for a DN returns only one result: when there is an entry in the secondary OU and the current date is earlier than the stored timestamp, respond with the values from that entry; when there is no entry with that DN in the secondary OU but there is one in the primary OU, respond with the values from the primary entry.
Client requests DN=ID1234
Check whether DN=ID1234 exists in OU=secondary and validate the current date against the validThrough timestamp
If yes, respond with the values from DN=ID1234,OU=secondary
If not, check whether DN=ID1234,OU=primary exists
If yes, respond with those values
If not, respond with nothing found
Here is an example:
DN=ID1234,OU=primary,dc=example,dc=local
> attributeX = "Blue"
DN=ID1234,OU=secondary,dc=example,dc=local
> attributeX = "Red"
> validThrough = 05.01.2023 (Human readable)
So a query before 05.01.2023 delivers "Red", and after that date I get "Blue".
My idea was something like a third, merged OU that always contains the currently valid entry.
I tried to achieve that with the merge backend, but I am stuck: I have no idea how to get a current timestamp (like NOW() in SQL) to create a filter like this:
(&(dn=<DN>)(|(!(validThroughTimestamp<=<current timestamp>))(!(validThroughTimestamp=*))))
Can anyone advise me how to solve this? Is there an overlay for it, or does anyone have a completely different idea to achieve this?
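For what it's worth, since LDAP filters have no NOW() equivalent, the current timestamp has to be injected by whatever performs the search. Here is a minimal sketch of the desired lookup logic in Python with the ldap3 library, assuming uid as the RDN attribute and validThrough stored as a GeneralizedTime value with an ORDERING matching rule; the server address and attribute names are assumptions, not something from the question:

from datetime import datetime, timezone
from ldap3 import Connection, Server

BASE = "dc=example,dc=local"

def lookup(conn, entry_id):
    # Current time in LDAP GeneralizedTime format, e.g. 20230105000000Z
    now = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%SZ")
    # 1. Prefer a secondary entry whose validThrough lies in the future.
    if conn.search(f"ou=secondary,{BASE}",
                   f"(&(uid={entry_id})(validThrough>={now}))",
                   attributes=["*"]) and conn.entries:
        return conn.entries[0]
    # 2. Otherwise fall back to the primary entry.
    if conn.search(f"ou=primary,{BASE}",
                   f"(uid={entry_id})",
                   attributes=["*"]) and conn.entries:
        return conn.entries[0]
    return None  # nothing found

conn = Connection(Server("ldap://localhost"), auto_bind=True)
print(lookup(conn, "ID1234"))

The same logic could live in a small proxy in front of OpenLDAP if the client itself cannot be changed.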

Related

BizTalk routing messages in message box based on field value

I have an exercise I'm working to complete; previously it was de-batching multiple XML messages from one file into individual files. Then I had to route individual files based on a field value that had been promoted, using filters on a port. Now the exercise has evolved into taking a multi-record XML file, breaking it down into individual XML records, and routing the output to different folders based on a value in one of the fields. The hurdles are as follows:
I can't promote a repeating field such as the one I have to use to sort the outbound messages.
The value of the field is a System.Int32; I am sorting on "equal to or more than 900" and "less than 900", so I need the int type.
Beyond a simple "idNum >= 900" I am in over my head with the necessary expression(s).
I have the basic orchestration design down; I am just lacking the expressions. The node I am looking to validate against is IDNum, and it occurs in each record.
UPDATE: Still not working
I put in the following in my expression: IDNumDefined.Customer.IDNum >= 900
and I get "identifier Customer does not exist in "IDNumDefined"; are you missing an assembly reference?" and "unexpected token '>=' "
Ideas? (Sorry about not updating the question here.)
The debatching has to occur using an Envelope and Body schema.
Once you have this figured out, the debatching can occur using a simple XML disassembler. In the body schema you can quick-promote your idNum field by associating a PropertySchema with it.
Once this is taken care of, it is easy to use two send ports in order to set your filter subscription(s).
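BizTalk expresses all of this declaratively, but as a sanity check on the logic itself, here is a hedged Python sketch of what the disassembler plus the two filtered send ports accomplish; the element names and output folders are assumptions, only the 900 threshold comes from the question:

import xml.etree.ElementTree as ET
from pathlib import Path

def debatch_and_route(batch_file):
    # Split the multi-record file into individual records...
    for i, record in enumerate(ET.parse(batch_file).getroot().findall("Customer")):
        id_num = int(record.findtext("IDNum"))
        # ...and route on the promoted value: one "send port" per range.
        folder = Path("ge900" if id_num >= 900 else "lt900")
        folder.mkdir(exist_ok=True)
        ET.ElementTree(record).write(folder / f"record_{i}.xml")

debatch_and_route("batch.xml")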

How can I be notified of an index being updated on DynamoDB?

I have a table (key=username, value=male or female) and an index on the values.
After I add an item to the table, I want to update the counts of males and females. However, because the index is a Global Secondary Index, the count query is not consistent right after a successful write.
Is there a way (DynamoDB Streams, Lambda, ...) to monitor when the index is up to date?
Note that I'm not looking for a solution that involves something else (keeping counts of increments in Redis or ...); what I describe here is a simplified problem, specifically to ask how I can monitor an index in DynamoDB.
Thanks!
I am not sure if there is any mechanism currently provided to check this, but you can easily solve this problem by adding a single line of code to your query:
ConsistentRead = True
DynamoDB has a parameter which, when set to true, will make sure that you get the latest updated value.
Now, when you add/update the item and then query the data, add the ConsistentRead option; this will ensure that you have the latest count value.
Here is the reference link.
If you are able to accomplish this using another technique then please do share it.
Hope that helps.
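A minimal boto3 sketch of a strongly consistent read (table and attribute names are assumptions). One caveat worth hedging: DynamoDB only honors ConsistentRead against the base table and local secondary indexes; a query with ConsistentRead=True against a global secondary index is rejected, so this helps only where the count can be read from the table itself:

import boto3

table = boto3.resource("dynamodb").Table("users")  # hypothetical table name

# Strongly consistent read: reflects every write acknowledged before
# the request. Not available on global secondary index queries.
resp = table.get_item(
    Key={"username": "alice"},
    ConsistentRead=True,
)
print(resp.get("Item"))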

How to list unique values of a particular field in Kibana

I have a field named rpc in my Elasticsearch database and I am displaying it using Kibana. When I search in the Kibana search bar like:
rpc:*
it displays all the values of the rpc field, but I want only the unique values to be displayed.
I have been playing around with Kibana 4 for a couple of weeks now. I find it intuitive and simple, and the experience has been great so far. Following your question, I tried getting unique results via a Data Table visualization. Why? Because I personally find it easier to understand. Following are the steps:
1. Get unique count
Create the visualization (Visualize -> Data Table). First let's get the count of how many unique entries we have for a particular field (we will use this later for verification). I'm using clientip.raw, but as far as I can see, it will work just fine with any friendly field name too.
2. Set the aggregation right
Set your aggregation back to Count and add a Split Rows bucket as follows. Not doing this will give you a count of 1 for each field value (since it is looking for unique counts) when you populate the table. The noteworthy part is setting the Top field to 0, because Kibana won't let you enter anything other than a digit (obviously!). This was the tricky part. Hit Apply and you'll get the results: unique field values and the count of each of them.
3. Verification:
Going to the last page of the table, we see there are exactly 543 results. This is how I know it works.
What Next?
You save this visualization and add it to a Dashboard. There you can always check the request, query, response and other stats.
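For the curious, what the Data Table builds under the hood is roughly a cardinality aggregation (the unique count) plus a terms aggregation (one row per unique value). A sketch using only Python's standard library against a local Elasticsearch; the index pattern, field name, and bucket size are assumptions:

import json
import urllib.request

query = {
    "size": 0,  # no raw hits, only aggregation buckets
    "aggs": {
        "unique_count": {"cardinality": {"field": "clientip.raw"}},
        "per_value": {"terms": {"field": "clientip.raw", "size": 1000}},
    },
}
req = urllib.request.Request(
    "http://localhost:9200/logstash-*/_search",
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
print(json.load(urllib.request.urlopen(req)))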
Just an addition to mathakoot's answer above.
For users of a newer version (which no longer allows a bucket size of 0), just set a value greater than the maximum number of results, and report that value in the Options > Per Page field.
I am using Kibana 6, so the UI looks a bit different than in the older answers here.
Here is what worked for me:
Create a visualization from your query; I used a line graph type (I don't think it matters)
Under Data, set the metrics aggregation to "Unique Count" and set the field to your field.
Set the x-axis aggregation to "Terms" and set the field to your field.
Set Size > your number of records
Under Metrics and Axes, disable drawing of the graph, circles, and labels (this really helps the UI not lag)
Run the query, then click "Inspect" and download the CSV
[Screenshots: the Data settings and the Metrics & Axes settings]
I wanted to achieve something similar, but I'm stuck with Kibana 3.1.
I simply added a panel of type "TERMS", configured its Field = User-agent, and left everything else at default values. This gave me a nice bar chart with one bar for each User-agent.

Saving data in table in SQL Server 2005

I am an ASP.NET developer and I work with SQL Server 2005.
I have a table with 4 columns, say:
Name
RollNo
Std
Div
If the client enters
Name
RollNo
Std
but doesn't enter data for the Div column and tries to save, it should not give an error; it should save the data in the database.
So it is giving you an error? Make sure that you set the Div column to "Allow Nulls" (no value).
It's also good to know that SQL Server can be set up to insert a default value if one isn't provided.
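A sketch of both options in T-SQL, run here through pyodbc; the connection string, table name, and column type are assumptions:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=.;DATABASE=School;Trusted_Connection=yes"
)
cur = conn.cursor()

# Option 1: let Div accept NULL when the client omits it.
cur.execute("ALTER TABLE Students ALTER COLUMN Div VARCHAR(10) NULL")
# Option 2: give Div a default that is used when no value is passed.
cur.execute("ALTER TABLE Students ADD CONSTRAINT DF_Students_Div DEFAULT 'A' FOR Div")

# An insert that omits Div now succeeds; Div gets the default (or NULL).
cur.execute(
    "INSERT INTO Students (Name, RollNo, Std) VALUES (?, ?, ?)",
    ("Asha", 17, "VII"),
)
conn.commit()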
Hi,
That means you need to save either a default division for that standard or a null value.
You can set up the database to allow NULLs and not pass anything, and that saves the data with null values.
Otherwise, I think you should put a default value in Div for that standard, so whenever you don't pass that value in the INSERT statement, it takes the default one.
Will that help, or did I misunderstand?
If I understand your problem, you're going to have NULL in one of your fields (provided you allow NULLs in the fields). Depending on where you're experiencing your problem, your code will need to test for NULL in the information a client inputs, or test for NULL when it retrieves the information.
If you're running into an error, it might be prudent to post details of the error as well.

"User Preferences" Database Table Design

I'm looking to create a table for user preferences and can't figure out the best way to do it. The way that ASP.NET does it by default seems extremely awkward, and I would like to avoid that. Currently, I'm using one row per user, where I have a different column for each user preference (not normalized, I know).
So the other idea I came up with was to split the preferences themselves into their own table, and then have one row per preference per user in a user-preferences table; however, this would mean each preference would need to be the exact same datatype, which doesn't sound too appealing to me either.
So, my question is: What is the best/most logical way to design a database to hold user preference values?
Some of the things I try to avoid in database work are data duplication and unnecessary complication. You also want to avoid "insert, update, and deletion anomalies". Having said that, storing user preferences in one table, with each row = one user and the columns being the different available preferences, makes sense.
Now if you can see these preferences being used in any other form or fashion in your database, like multiple objects (not just users) using the same preferences, then you'll want to go down your second route and reference the preferences with FK/PK pairs.
As for what you've described I see no reason why the first route won't work.
I usually do this:
Users table (user_id, ... etc.)
Options table (option_id, data_type, ... etc.) - the list of things that can be set by the user
Preferences table (user_id, option_id, setting)
I use the SQL_VARIANT data type for the setting field so it can hold different data types, and I record the data type of the option as part of the option definition in the Options table, for casting the setting back to the right type when queried.
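A rough DDL sketch of that layout, run here through pyodbc; all table, column, and connection details are assumptions:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=.;DATABASE=App;Trusted_Connection=yes"
)
cur = conn.cursor()

# Options: what can be set, plus the type to cast the setting back to.
cur.execute("""
    CREATE TABLE Options (
        option_id INT PRIMARY KEY,
        data_type VARCHAR(20) NOT NULL,  -- e.g. 'int', 'varchar'
        name VARCHAR(50) NOT NULL
    )
""")
# Preferences: one row per user per option; SQL_VARIANT holds any type.
cur.execute("""
    CREATE TABLE Preferences (
        user_id INT NOT NULL,
        option_id INT NOT NULL REFERENCES Options(option_id),
        setting SQL_VARIANT NULL,
        PRIMARY KEY (user_id, option_id)
    )
""")

# On read, cast the variant using the recorded data_type.
cur.execute("""
    SELECT o.name, CAST(p.setting AS VARCHAR(255)) AS setting_text
    FROM Preferences p JOIN Options o ON o.option_id = p.option_id
    WHERE p.user_id = ?
""", 1)
conn.commit()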
If you store all your user preferences in a single row of a User table, you will have a maintenance nightmare!
Use one row per preference per user, and store the preference value as a varchar (length 255, say, or some value large enough to meet your requirements). You will obviously have to convert values into/out of this column.
The only situation where this won't work easily is if you want to store some large binary data as a user preference, but I have not found that to be a common requirement.
Real quick, one method:
User(UserID, UserName, ...)
PreferenceDataType(PreferenceDataTypeID, PreferenceDataTypeName)
PreferenceDataValue(PreferenceDataValueID, PreferenceDataTypeID, IntValue, VarcharValue, BitValue, ...)
Preference(PreferenceID, PreferenceDataTypeID, PreferenceName, ...)
UserHasPreference(UserID, PreferenceID, PreferenceDataValueID)
