Is there an OpenLDAP core schema reference?

I am looking for a reference for the OpenLDAP common core schemas. Although I have found this reference, I have been warned in the strongest terms (and insulted in some cases) not to rely on this source for accuracy. So I would greatly appreciate a pointer to an authoritative source.

I don't know why the insults; I've always found that source reliable. However, the authoritative references for core.schema are the RFCs cited in the source code.
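For example, each definition in core.schema names the RFC it comes from in its DESC field. A representative entry, paraphrased from memory rather than quoted verbatim:

    attributetype ( 2.5.4.3 NAME ( 'cn' 'commonName' )
        DESC 'RFC2256: common name(s) for which the entity is known by'
        SUP name )

Grepping the schema files for "RFC" surfaces the normative documents quickly (RFC 2256, whose user schema was later consolidated into RFC 4519, among others).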

Is it possible to connect to Kafka with Robot Framework?

I want to connect to and test Kafka.
Is it possible to test it using Robot Framework?
Does any library exist for this?
Thanks,
Sarada
I realize this post is 5 years old, but the answer (at least now) is yes!
Repository
Keyword documentation
Sorry I don't have enough experience to provide any tutoring on it (I'm a newb with Kafka), but at least it exists.
Yes, it is possible to interact with, and thus test, Kafka with Robot Framework. Based on a brief web search, though, it's highly unlikely that a ready-made library exists for it.
See my earlier response to this old thread, which discusses in detail what you're asking about and how to go about solving it, minus the actual implementation (a significant amount of work, which is why no one has written and shared such a library yet).
https://groups.google.com/forum/#!topic/robotframework-users/gnfR12xAU4U
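The core of such a library is just a produce/consume round trip, and that logic is language-agnostic. Here is a minimal sketch using the Confluent.Kafka .NET client; the broker address, topic, and group id are placeholders, and an actual Robot Framework library would wrap the equivalent calls in Python keywords:

    using System;
    using Confluent.Kafka;

    class KafkaSmokeTest
    {
        static void Main()
        {
            const string broker = "localhost:9092"; // placeholder
            const string topic = "smoke-test";      // placeholder

            // Produce one message and wait until it is actually delivered.
            var producerConfig = new ProducerConfig { BootstrapServers = broker };
            using (var producer = new ProducerBuilder<Null, string>(producerConfig).Build())
            {
                producer.Produce(topic, new Message<Null, string> { Value = "ping" });
                producer.Flush(TimeSpan.FromSeconds(10));
            }

            // Consume it back from the beginning of the topic and verify.
            var consumerConfig = new ConsumerConfig
            {
                BootstrapServers = broker,
                GroupId = "smoke-test-group", // placeholder
                AutoOffsetReset = AutoOffsetReset.Earliest
            };
            using (var consumer = new ConsumerBuilder<Ignore, string>(consumerConfig).Build())
            {
                consumer.Subscribe(topic);
                var result = consumer.Consume(TimeSpan.FromSeconds(10));
                Console.WriteLine(result != null && result.Message.Value == "ping" ? "PASS" : "FAIL");
            }
        }
    }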

.dll decompiler

I have inherited a project from a previous developer. All the ASP.NET code-behind files are contained within a .dll and the original source files are unavailable. Is there any reliable decompiler out there that produces fairly readable code? I've heard mixed responses while browsing other forums - some say there are applications that will decompile .dll files, others say they just produce practically unusable assembly code. Thoughts?
Thanks!
You could try Red Gate's Reflector with the FileDisassembler plugin. Also, your inheritance seems a little strange: what did the previous developer do with the source code? Didn't he use source control or perform backups? The usability of the source code produced by disassembling a .NET assembly will depend on whether the original developer obfuscated it when compiling.
You cannot produce maintainable, properly-designed source code by reverse-engineering a DLL. With clever hacking, you'll be able to alter it to do specific things. Typically, you'll find tutorials on how to bypass licensing/registration checks. (Some of them for stealing my software!)
But your effort will probably be 10x harder than if you had the source code. IMO, you're better off treating it as a black box: studying the inputs/outputs, behavior, design documents (if any), etc. Reverse-engineer REQUIREMENTS from what you see the DLL doing and what the host application needs to do with it. And then get funding/approval for a new project to build a new one. With source code!
Otherwise, your "inheritance" is a liability that you should quickly distance yourself from.
Edit: I missed the part about .NET. I don't know much about that; your mileage may be better than with a native binary.
--
Chris
I use JetBrains dotPeek.
Sometimes you just have to decompile.

Is ScriptReference order always preserved at runtime?

I've been bitten in the past by Page.RegisterClientScriptBlock-registered JS not being emitted in a stable order from machine to machine in the bad old .NET 1.1 days. Now I'm writing a set of user controls that use <asp:ScriptManager/> to reference JS, and although I haven't had any problems so far - order always seems to be preserved between <asp:ScriptReference> tags - I'm feeling a bit shy about it. MSDN seems silent on the topic, and various bloggers seem to indicate ordering is stable in .NET 2.0+, but I haven't found any definitive reference.
Does one exist? Is the order of inclusion of scripts I observe on my development machine guaranteed to be the one I'll see in all other contexts the webapp runs?
Further testing suggests that the answer is yes, at least in my particular production and development environments.
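If you'd rather not lean on unspecified behavior, you can at least centralize the ordering by registering the references programmatically in code-behind; this rests on the same assumption the markup makes, namely that the Scripts collection is emitted in the order entries are added. A sketch, with hypothetical paths and control name:

    using System;
    using System.Web.UI;

    public partial class MyUserControl : UserControl // hypothetical control
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            ScriptManager sm = ScriptManager.GetCurrent(Page);
            if (sm != null)
            {
                // Added in dependency order; paths are placeholders.
                sm.Scripts.Add(new ScriptReference("~/js/jquery.js"));
                sm.Scripts.Add(new ScriptReference("~/js/plugin.js")); // depends on jquery.js
            }
        }
    }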

Why is it so hard to find good research material on peer-to-peer programming?

After reading a bit more about how Gnutella and other P2P networks function, I wanted to start my own peer-to-peer system. I went in thinking that I would find plenty of tutorials and language-agnostic guidelines which could be applied; however, I was met with only vague, simplistic overviews.
I could only find very small P2P code samples which didn't do much more than run a client/server architecture on every node, which wasn't really what I was looking for. I wanted something like Gnutella, but there don't seem to be any articles out in the open describing how to join such a network.
RFC 4981, with its huge bibliography, could be a very good starting point.
I had to write a basic Gnutella client in C# using Web Services and I think the class notes on the P2P stuff are still available here and here.
You might have better success researching BitTorrent; I believe its creator has written some papers, and it seems others have as well.
BitTyrant
Bittorrent.org; see the developers section
I don't know what platform you are trying to use, but here is a decent article on the subject for .NET.
I've found the TheoryOrg Unofficial BitTorrent Specification to be the best online source for BitTorrent information. Also, the MonoTorrent code is fairly simple and easy to understand. There's also a project called "GCT" which implements JGroups-style P2P for LAN/multicast environments, and its code is similarly easy to understand (if a bit buggy).
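To give a sense of how approachable the wire protocol is, here is a sketch of the peer handshake as the unofficial spec lays it out: a 68-byte message consisting of pstrlen, pstr, eight reserved bytes, the 20-byte SHA-1 info hash, and a 20-byte peer id. The hash and id passed in are the caller's own values:

    using System;
    using System.Text;

    static class BitTorrentHandshake
    {
        // Builds the 68-byte handshake message that opens every peer connection.
        public static byte[] Build(byte[] infoHash, byte[] peerId)
        {
            if (infoHash.Length != 20 || peerId.Length != 20)
                throw new ArgumentException("info hash and peer id must each be 20 bytes");

            byte[] pstr = Encoding.ASCII.GetBytes("BitTorrent protocol");
            byte[] msg = new byte[1 + pstr.Length + 8 + 20 + 20]; // 68 bytes total

            msg[0] = (byte)pstr.Length;   // pstrlen = 19
            pstr.CopyTo(msg, 1);          // pstr
            // bytes 20-27: reserved flag bytes, left zeroed
            infoHash.CopyTo(msg, 28);     // SHA-1 of the torrent's info dictionary
            peerId.CopyTo(msg, 48);       // arbitrary 20-byte id identifying this client
            return msg;
        }
    }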
You can read the Gnutella2 specification and try to implement its messaging. For conceptual material, read Distributed Systems by Andrew Tanenbaum.
You can have a look at JXTA. Its intention was to be a generic, platform-agnostic P2P framework, in contrast to other P2P implementations, which are usually built for a very specific purpose (such as Gnutella).
Don't be fooled by its Java appearance; there are bindings available for C/C++/C#, but the core protocols are implemented in XML, which should translate to any language.
You can also download a free book here.

When should one use a project reference as opposed to a binary reference?

My company has a common code library which consists of many class library projects along with supporting test projects. Each class library project outputs a single binary, e.g. Company.Common.Serialization.dll. Since we own the compiled, tested binaries as well as the source code, there's debate as to whether our consuming applications should use binary or project references.
Some arguments in favor of project references:
Project references would allow users to debug and view all solution code without the overhead of loading additional projects/solutions.
Project references would assist in keeping up with common component changes committed to the source control system, as changes would be easily identifiable without leaving the active solution.
Some arguments in favor of binary references:
Binary references would simplify solutions and make for faster solution loading times.
Binary references would allow developers to focus on new code rather than potentially being distracted by code which is already baked and proven stable.
Binary references would force us to appropriately dogfood our stuff as we would be using the common library just as those outside of our organization would be required to do.
Since a binary reference can't be debugged (stepped into), one would be forced to replicate and fix issues by extending the existing test projects rather than testing and fixing within the context of the consuming application alone.
Binary references will ensure that concurrent development on the class library project has no impact on the consuming application, as a stable version of the binary is referenced rather than an in-flux version. It would be the project lead's decision whether or not to incorporate a newer release of the component if necessary.
What is your policy/preference when it comes to using project or binary references?
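For concreteness, the two styles look roughly like this inside a .csproj <ItemGroup> (the paths and the GUID are placeholders):

    <!-- Project reference: builds and tracks the library's source directly. -->
    <ProjectReference Include="..\Company.Common.Serialization\Company.Common.Serialization.csproj">
      <Project>{00000000-0000-0000-0000-000000000000}</Project>
      <Name>Company.Common.Serialization</Name>
    </ProjectReference>

    <!-- Binary reference: consumes a specific, already-built and tested dll. -->
    <Reference Include="Company.Common.Serialization">
      <HintPath>..\lib\Company.Common.Serialization.dll</HintPath>
    </Reference>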
It sounds to me as though you've covered all the major points. We've had a similar discussion at work recently and we're not quite decided yet.
However, one thing we've looked into is referencing the binary files, to gain all the advantages you note, but having the binaries built by a common build system, with the source code in a common location accessible from all developer machines (at least when they're on the network at work), so that any debugging can in fact dive into library code if necessary.
However, on the same note, we've also tagged a lot of the base classes with appropriate attributes to make the debugger skip them completely, because any debugging you do in your own classes (at the level you're developing) would otherwise be vastly outsized by code from the base libraries. This way, when you hit the Step Into debugging shortcut key on a library class, you resurface in the next piece of code at your current level instead of having to wade through tons of library code.
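The attributes in question are the System.Diagnostics debugger attributes; a minimal sketch, where the base class below is hypothetical:

    using System.Diagnostics;

    // DebuggerNonUserCode hides this type from "Just My Code" debugging;
    // DebuggerStepThrough makes Step Into skip straight over a member.
    [DebuggerNonUserCode]
    public class EntityBase // hypothetical library base class
    {
        [DebuggerStepThrough]
        protected void Validate()
        {
            // proven, stable plumbing the debugger should not stop in
        }
    }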
Basically, I definitely vote up (in SO terms) your comments about keeping proven library code out of sight for the normal developer.
Also, if I load the global solution file that contains all the projects and, basically, just everything, ReSharper 4 seems to have some kind of coronary problem, as Visual Studio practically comes to a standstill.
In my opinion the greatest problem with using project references is that it does not provide consumers with a common baseline for their development. I am assuming that the libraries are changing. If that's the case, building them and ensuring that they are versioned will give you an easily reproducible environment.
Not doing this will mean that your code will mysteriously break when the referenced project changes. But only on some machines.
I tend to treat common libraries like this as third-party resources. This allows the library to have its own build processes, QA testing, etc. When QA (or whoever) "blesses" a release of the library, it's copied to a central location available to all developers. It's then up to each project to decide which version of the library to consume, by copying the binaries to a project folder and using binary references in the projects.
One thing that is important is to create debug symbol (pdb) files with each build of the library and make those available as well. The other option is to create a local symbol store on your network and have each developer add that symbol store to their VS configuration. This would allow you to debug through the code and still have the benefits of using binary references.
As for the benefits you mention for project references, I don't agree with your second point. To me, it's important that the consuming projects explicitly know which version of the common library they are consuming and for them to take a deliberate step to upgrade that version. This is the best way to guarantee that you don't accidentally pick up changes to the library that haven't been completed or tested.
When you don't want the library in your solution, or you may need to split your solution later, send all library output to a common bin directory and reference it there.
I have done this in order to allow developers to open a tight solution that only has the Domain, test, and Web projects. Our Windows services, Silverlight stuff, and web control libraries are in separate solutions that include the projects you need when looking at those, but NAnt can build it all.
I believe your question is actually about when projects go together in the same solution; the reason being that projects in the same solution should have project references to each other, and projects in different solutions should have binary references to each other.
I tend to think solutions should contain projects that are developed closely together. Such as your API assemblies and your implementations of those APIs.
Closeness is relative, however. A designer for an application is, by definition, closely related to the app; however, you wouldn't want to have the designer and the application within the same solution (if they are at all complex, that is). You'd probably want to develop the designer against a branch of the program that is merged at intervals spaced further apart than the normal daily integration.
I think that if the project is not part of the solution, you shouldn't include it there... but that's just my opinion.
In short, I separate it by concept.
