Translucent-Overlays configuration - openldap

Is anyone familiar with how to set up the translucent overlay for OpenLDAP 2.4.40? I have searched the internet without any luck.
What I want to implement is two OpenLDAP servers, where one server gets search results from the other, overrides some of the information based on its own database, and then returns the final attributes.

His question boils down to "How do you start?" I've also read the "documentation"; it's terrible on this subject.
The slapo-translucent man page has no useful information beyond "this is the translucent overlay, you can enable it." There's nothing about how you configure it to point at the remote LDAP server, and very little on how to specify which attributes (cn, ou, o, and so on) you want to add or modify in the remote search results. (I just want to add to a user's group membership, and there isn't an example of something even that simple.)
Everything regarding OpenLDAP 2.4 says you should be using ldapadd/ldapmodify to change the dynamic slapd configuration under /etc/ldap/slapd.d, and yet ALL the examples and tutorials for translucent overlays use the outdated slapd.conf format.
Basically, none of the documentation is in any way educational unless you are already a full wizard at administering OpenLDAP. Add to that the fact that the community documentation comes from a wide variety of Unix distributions, none of which conform to each other, and the confusion only multiplies.
My interaction with OpenLDAP leaves me with the impression that it has easily the most confusing configuration and usage architecture of any service I have ever seen.
A directory service is something an admin should be able to install and stand up in a day with no prior experience. This one is clearly going to take weeks of untying the configuration knot it requires.
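For what it's worth, here is my best guess at what a cn=config setup might look like, pieced together from scattered posts. Treat it as an untested sketch: the module entry, the database index ({2}mdb), and the URI are placeholders that must match your install.

    # Load the overlay module (only needed if your build uses dynamic modules):
    dn: cn=module{0},cn=config
    changetype: modify
    add: olcModuleLoad
    olcModuleLoad: translucent.la

    # Stack the translucent overlay on the local database that will hold
    # the overriding entries:
    dn: olcOverlay={0}translucent,olcDatabase={2}mdb,cn=config
    changetype: add
    objectClass: olcOverlayConfig
    objectClass: olcTranslucentConfig
    olcOverlay: translucent

    # Point the overlay at the remote server through a subordinate ldap
    # database entry:
    dn: olcDatabase=ldap,olcOverlay={0}translucent,olcDatabase={2}mdb,cn=config
    changetype: add
    objectClass: olcLDAPConfig
    objectClass: olcTranslucentDatabase
    olcDatabase: ldap
    olcDbURI: ldap://remote.example.com

This would be loaded with something like ldapmodify -Y EXTERNAL -H ldapi:/// -f translucent.ldif. As far as I can tell, entries you then add to the local database under the same DN as a remote entry have their attributes merged over the remote results, which would cover the group-membership case.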

Related

Why is Google App Maker not allowing me to add database connections?

If I attempt to add a new SQL data model, a popup shows up explaining that the feature is locked and that I must contact my administrator (myself). However, there is no further explanation in the documentation on how to unlock this feature.
This is how App Maker used to look back when it allowed me to add database connections:
This is how App Maker looks now when starting a new app:
The App Maker engineers make drastic changes every now and then. I believe their intent is to make the platform better, but this kind of thing really annoys me and makes life harder, honestly.
I ran across this problem and found out that they are forcing admins to set up a default instance in the G Suite Admin console. You can read more about that here. You haven't completed that step, and that is why you see what you are seeing. Honestly, though, it's crazy! What if I don't want to do that?! But they are the product owners and they set the rules, so we have to suck it up and do what they want. Unless a bunch of people complain about it, they are not going to change the behavior.
Fortunately, I was able to find a workaround. What we (you and I) are trying to do is set up a custom SQL database. Right now, that is only available if you've already done what I described above. So the workaround is to import an app that already has a custom SQL database set up, and then modify the Google Cloud SQL address. Look at the example below:
Here is the demo workaround app that I use. Download it to your machine, then import it as shown in the image above. I hope this helps!

Migration help: WebSphere BPM 8.0.1.3 to 8.5.6 (redirection rule)

We are migrating from WebSphere BPM 8.0.1.3 to 8.5.6, and our plan is to move application by application rather than in a big bang. The idea is that when we move an application to the new server, we create an IHS rule that redirects the related URLs to the new server. That means we keep some applications running on the old server while others have already been migrated to the new one.
Is this possible to achieve? Or is there an alternative to rewriting IHS rules, such as making use of the web server plug-in?
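For illustration, the kind of rule we had in mind is something like the following sketch (IHS is Apache-based; the context root and hostname are placeholders, not real BPM paths):

    RewriteEngine On
    # Send everything under a migrated application's (placeholder) path
    # to the new server:
    RewriteRule ^/MyMigratedApp/(.*)$ https://newbpmserver/MyMigratedApp/$1 [R,L]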
Unfortunately, I don't think your current approach is going to work well for you. I've outlined the various options for IBM BPM upgrades here. I see several major problems with your approach, all of which come down to the fact that many of the URLs used by IBM BPM contain no details about the context of the request.
The first issue I see is that IBM BPM uses a single portal for a given user's work; that is, all their tasks across the various BPM solutions appear in the same web UI, and the URL is the same across all Process Applications in the install. This means all your users get their task list by going to a URL like https://mybpmserver/portal. There is no way to tell from this context which process app a given user is working with, so you don't know whom to redirect to the new server.
The second issue is that users can work with multiple process apps, so even if the context were known in the above URL, you would run into complexities when users work in two different process apps unless both have been migrated.
The third issue is that BPM is essentially a state engine. IBM does not supply a way to "migrate" that state from an old install to a new one on a per-Process-App (PA) basis; you have to migrate all or none. Assuming "none" (since it sounds like you want to follow the drain approach in my article), you then have the problem that the URLs for executing a task do not carry the PA context, so you won't know which server to direct a given task to. That is, for a given PA you will have tasks on the old server that existed before the upgrade and tasks on the new server created after it, but the URLs for these tasks will look essentially the same.
There are additional issues, but the main one comes down to properly understanding how the runtime BPM engines work. Some of the above issues can be mitigated if you have a separate UI layer for presenting tasks to the users (my company makes a portal replacement that can do this), which would allow that layer to understand the context of each task; but if you have such a layer, you can implement the correct behavior in that code and not worry about WAS configuration settings.
You could use the plugin-cfg.xml merge tool on the two generated plugin-cfg.xml files. That way the WAS plug-in would always know which server has which applications.
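If you go that route, the invocation is roughly as follows; this is a sketch, since the script location and file names vary by WAS version, so check your install's documentation:

    # Merge the plugin-cfg.xml generated on each server into one file for
    # the IHS plug-in (paths and file names are placeholders):
    /opt/IBM/WebSphere/Plugins/bin/pluginCfgMerge.sh \
        plugin-cfg-old.xml plugin-cfg-new.xml merged-plugin-cfg.xml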

Licensing system for Flex Application

Does anyone have experience creating a licensing system for a Flex application? My Flex application is a SWF embedded in an ASPX page; for data retrieval it accesses web services.
My intention is to sell it to my customers on a license basis (e.g., using a serial number).
I think I've answered this a lot, but I can't find it. Here are two services:
NitroLM
Sharify
There used to be another, I think, but I can't remember the name and my Google-Fu is failing me.
You didn't go into detail about how you want your licensing scheme to work, so it's tough to give you specifics. The algorithm is not complicated, but the details can become very much so.
I built something for Flextras in the early days that would watermark unlicensed Flex components. The gist (a code sketch follows these steps):
Load the serial number.
Perform some secret-sauce algorithm. In the Flextras model this was done at runtime based on a serial number specified at compile time; it checked the serial number data against the domain, the component, and the component's version. For an application you may want to check other data (the username, for example), possibly against some central repository server.
Check whether the user is allowed access. If so, let them have at it; if not, show them the "invalid" message/screen/etc.
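To make the "secret sauce" step concrete, here is a minimal C# sketch of one possible scheme: an HMAC over the fields you license against. The key, field choice, and format are invented for illustration and are not what Flextras shipped.

    using System;
    using System.Security.Cryptography;
    using System.Text;

    // Hypothetical sketch: a serial is valid if it matches an HMAC of the
    // data being licensed against (here: domain + component + version).
    static class LicenseCheck
    {
        static readonly byte[] SecretKey = Encoding.UTF8.GetBytes("replace-me");

        public static string MakeSerial(string domain, string component, string version)
        {
            using (var hmac = new HMACSHA256(SecretKey))
            {
                byte[] data = Encoding.UTF8.GetBytes(domain + "|" + component + "|" + version);
                return Convert.ToBase64String(hmac.ComputeHash(data));
            }
        }

        public static bool IsValid(string serial, string domain, string component, string version)
        {
            // A real system would also check expiry, seat counts, or a
            // central registration server at this point.
            return serial == MakeSerial(domain, component, version);
        }
    }

Since your SWF already calls web services, the natural place to enforce a check like this is server-side in those services, where it can't simply be stripped out of the client.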

Best Practices for a Self-Updating Desktop Application in a Network Environment

I have searched through Google and SO for possible answers to this question, but can only find small bits of information scattered around, most of which appear to be personal opinion.
I'm aware that this question could be considered subjective, but I'm not looking for personal opinion; rather, facts with reasons (e.g., past experience), or even a single link to a blog/wiki that describes best practices for this (which is what I'd prefer, to be honest). What I'm not looking for is how to make this work; I already know how to create a self-updating desktop application.
I want to know about best practices for creating a self-updating desktop application. The sorts of best practices I'm especially curious about are:
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
How often should you check for updates? Weekly/daily/hourly and exactly why?
Should the update be visible to the user or run behind the scenes from a UI point of view?
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
Should you allow users to update from a central location or only allow updating through the specified application? (for closed business applications).
Surely there are some written rules/suggestions about this stuff? One of the most annoying things about a lot of applications is the updating, as it's hard to find a good balance between "out of date" and "in the user's face".
If it helps, consider this to be written in C#/.NET for a single client, running on machines with constant connectivity to the update server; all of these machines talk to each other through the application, and all also talk to a central database server.
One best practice that many applications overlook: ask to update when the user is closing your application, NOT when they have just launched it.
It's incredible how many apps don't do this (Firefox, for example). You just launched the app and want to use it now; instead, it asks whether you want to update, which of course is going to take five minutes and require restarting the app.
This is nonsense. Just do the update at the end.
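A minimal WinForms-style sketch of the idea; the pending-update check and installer path are placeholders:

    using System.Diagnostics;
    using System.Windows.Forms;

    public class MainForm : Form
    {
        protected override void OnFormClosing(FormClosingEventArgs e)
        {
            base.OnFormClosing(e);
            // Prompt at shutdown, not at startup, so the update never
            // stands between the user and the task they opened the app for.
            if (UpdateIsPending() &&
                MessageBox.Show("A new version has been downloaded. Install it now?",
                    "Update", MessageBoxButtons.YesNo) == DialogResult.Yes)
            {
                Process.Start(@"updates\setup.exe"); // runs after we exit
            }
        }

        static bool UpdateIsPending() { return false; } // placeholder check
    }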
It's hard to give a general answer; it depends on the context: criticality of the update, the kind of app, user preferences, number of users, network bandwidth, etc. Here are some of the options and trade-offs.
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
As a developer, your best interest is to have all deployed apps as up to date as possible, since this reduces your maintenance effort. Thus, if the user does not mind, you should update.
How often should you check for updates? Weekly/daily/hourly and exactly why?
If the updates are transparent to the user and do not require an immediate restart of the app, then I'd suggest doing it as often as your communication bandwidth allows, considering both the update check (frequent but small) and the download (infrequent but large).
Should the update be visible to the user or run behind the scenes from a UI point of view?
Depends on the user's preferences, but also on the type of update: bug fixes vs. functionality/UI changes (the user will be puzzled to see the look and feel change with no prior alert).
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Same arguments as for the previous question.
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
If the app is small, download it from scratch. This prevents all sorts of weird bugs caused by mismatches between different patches ("DLL hell"). However, it may mean long download times or a heavy toll on your network.
Should you allow users to update from a central location or only allow updating through the specified application? (for closed business applications).
I think both.
From practical experience: don't forget to add functionality for updating the update engine itself, which means that performing an update is usually a two-step approach (a sketch follows the list):
Check if there are updates to the update engine
Check if there are updates to the actual application
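A sketch of the two-step flow in C# (version sources and the update steps themselves are placeholders):

    using System;

    static class Updater
    {
        public static void CheckForUpdates(Version engineLocal, Version engineRemote,
                                           Version appLocal, Version appRemote)
        {
            // Step 1: make sure the engine itself is current before trusting
            // it to apply an application update.
            if (engineRemote > engineLocal)
            {
                UpdateEngine();   // replace the updater, then re-run this check
                return;
            }

            // Step 2: only now compare and update the application.
            if (appRemote > appLocal)
                UpdateApplication();
        }

        static void UpdateEngine() { /* placeholder */ }
        static void UpdateApplication() { /* placeholder */ }
    }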
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
A common practice is to have a "ProtocolVersion" method that indicates the lowest/oldest version allowed.
The "ProtocolVersion" can be supplied either by the client or by the server, depending on the level of trust between them. At a low trust level it is probably better to have the client provide the "ProtocolVersion" and then deny access server-side until the client is updated. In a high-trust scenario it is easier to have the server supply the "ProtocolVersion" it accepts, with all the logic for adapting to it (including updating the client application) implemented in the client only. This gives the benefit that the version check/handling code only needs to live in one place.
Do not ever try to force an update unless your lawyers demand it. Show the user an update notification she can either accept or ignore, and try not to nag her about the same version too much if she has rejected it. To help her make the decision, include a link to release notes or a short summary of changes.
Weekly would be a good default update-check interval, but let the user choose it, up to and including disabling the update check entirely. Do not check too often: she might be on an expensive mobile data plan, or she might just not like the idea of an application phoning home.
The update check itself should be completely silent. If an update is found, display a notification for the user. During download and installation, show a progress bar.
To keep this simple, notify the user about any newer version. If you do not want to annoy them with frequent updates containing just a few minor bug fixes, do not publish every minor version at the download location watched by the update checker.
Maintaining patches for all previously released versions is too much work. If the download size becomes a problem, figure out some way other than patches to make it smaller (a 7-Zip compressed self-extracting exe, splitting the application into multiple MSI packages with independent versions, etc.).
Two more things:
Do not implement the update engine as a process that constantly runs in the background even when I'm not using your application. My PC already has about ten such processes hogging resources, which is very annoying.
When updating the update engine itself, on one hand you need the engine running to show the installation progress UI, but on the other hand the update process must be closed to avoid the reboot that would result from the exe file being locked. There are a number of tricks: running a helper program from %TEMP%, using the Windows Installer restart manager, renaming the updater exe before starting the installation package, etc. Keep this in mind when architecting the update engine.
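A sketch of the %TEMP%-helper trick in C# (the argument handling and package path are invented for illustration):

    using System;
    using System.Diagnostics;
    using System.IO;

    static class SelfUpdate
    {
        // Copy the updater somewhere it won't be file-locked, run the copy,
        // and exit so the original exe can be replaced without a reboot.
        public static void UpdateSelf(string installerPackage)
        {
            string src = Process.GetCurrentProcess().MainModule.FileName;
            string tmp = Path.Combine(Path.GetTempPath(), "updater-helper.exe");

            File.Copy(src, tmp, true);
            Process.Start(tmp, "--apply \"" + installerPackage + "\"");
            Environment.Exit(0);
        }
    }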
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
Ask the user.
How often should you check for updates? Weekly/daily/hourly and exactly why?
Ask the user.
Should the update be visible to the user or run behind the scenes from a UI point of view?
Ask the user.
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Ask the user (notice a trend here?).
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
Typically, patch, if the application is of any significant size.
As far as the "ask the user" responses go, that doesn't mean prompting them every single time. Instead, give them the option to set what they should be prompted for and what should just be done invisibly (and the first time a given thing occurs, ask them what should be done in the future, and remember it). This shouldn't be very difficult, and you gain a lot of goodwill from a larger portion of your user base, since it's very hard for fixed settings to suit the desires of everyone who uses your app. When in doubt, more options are better than fewer, especially when they're the kind of option that's fairly trivial to code.

Is it commonplace/appropriate for third party components to make undocumented use of the filesystem?

I have been using two third-party components for PDF document generation (in .NET, but I think this is a platform-independent topic). I will leave the companies' names out of it for now, but I will say they are not extremely well-known vendors.
I have found that both products make undocumented use of the filesystem (i.e., putting temp files on disk). This has created a problem for me in my ASP.NET web application, as I now have to identify the file locations and set permissions on them as appropriate. Since my web application is set up for impersonation using Windows authentication, this essentially means I have to assign write permissions to a few file locations on my web server.
Not that big a deal once I figured out why the components were failing, but I see this as a maintenance issue. What happens when we upgrade our servers to an OS that changes one of the temporary file locations? What happens if the vendor decides to change the temporary file location? Our application will "break" without a single line of our code changing. Relatedly, if we have to stand this application up on a "fresh" machine (regardless of environment), we have to know about this issue and set permissions appropriately.
Unfortunately, the components do not provide a way to make this temporary file path configurable, which would at least make it explicit what is going on under the covers.
This isn't really a question that I need answered; it's more of a kickoff for a conversation about whether what these component vendors are doing is appropriate, how it should be documented and communicated to users, and so on.
Thoughts? Opinions? Comments?
First, I'd ask whether these PDF generation tools are designed to be run within ASP.NET apps. Do they claim that this is something they support? If so, then they should provide documentation on how they use the filesystem and what permissions they need.
If not, then you're probably using an inappropriate toolset. I've been there and done that: I worked on a project where a "well known address lookup tool" was used, but the version we used was designed for desktop apps. As such, it wasn't written to cope with hundreds of requests, many of them simultaneous, and it caused all sorts of hard-to-reproduce errors.
Commonplace? Yes. Appropriate? Usually not.
Temp files are one of the appropriate uses, IMHO, as long as the component uses the proper %TEMP% folder or, even better, the built-in Path.GetTempPath/Path.GetTempFileName functions.
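For example, a minimal sketch:

    using System.IO;

    // The framework resolves the correct per-user temp location, so a
    // component never needs a hard-coded path.
    string tempDir  = Path.GetTempPath();      // the user's %TEMP% folder
    string tempFile = Path.GetTempFileName();  // creates a unique empty file

    try
    {
        File.WriteAllText(tempFile, "scratch data");
        // ... work with the file ...
    }
    finally
    {
        File.Delete(tempFile);                 // clean up when done
    }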
In an ideal world, every third-party component would come with a Code Access Security description, listing in detail what is needed (and for what purpose), but CAS is possibly one of the most ignored features of .NET...
Writing temporary files would not be considered outside the normal functioning of any piece of software. Unless it is writing temp files to a really bizarre place, this seems more likely something they never thought to document than a deliberate attempt to cause you trouble. I would simply contact the vendor, explain what you are doing, and ask if they can provide documentation.
Also, Martin makes a good point about whether this is an app that should run with ASP.NET or a desktop app.
