Drupal: multiple features and non-reusable components

This is a general question on how Features should be created when we have components that could potentially be used in multiple features.
Suppose a context (c1) takes a view (v1) and is added to a feature (f1). Now let's say v1 has potential use somewhere else, and I want to create a new feature (f2) to include it in... but the Features module doesn't give you the option to do so. The only thing I can do is clone v1 into v2 and use that in f2.
I may be shortsighted, but I think that if f2 needed v1, I could just add the other components that would make up f2 and put them in f1... because more likely than not, those two features would be closely related (see "One big feature..."), at least enough to warrant using the same view (which in turn could include the same node types, roles, etc.).
I guess I'm just curious: has there been a case where you've created a feature with a view, and then needed a different feature to use that same view?

The easiest solution in your case is to add a dependency from f2 to f1 to make sure that you have your v1 available.
You can't add v1 to f2 a second time, because your view would be defined twice and Features would create a circular dependency.
Otherwise, to create reusable components, you should clone your view to make sure that you have "everything" you need in one and the same feature.
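In practice, the dependency is just one line in the exported .info file of the second feature. A minimal sketch, using the placeholder names from the question and assuming a Drupal 6 feature:

; f2.info
name = Feature 2
core = 6.x
dependencies[] = f1

With that in place, enabling f2 pulls in f1, and v1 stays defined in exactly one feature.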

Related

Generating files to multiple paths with Swagger Codegen?

I'm creating a template for our server-side codegen implementation, but I ran into an issue with a feature request...
The developers who are going to use the generated base want the following pattern (the generator is based on the dotnetcore one):
Controllers
    v{apiVersion}
        {endpoint}ApiController : Controller, I{endpoint}Api
Interfaces
    v{apiVersion}
        I{endpoint}Api
        I{endpoint}DataProvider
DataProviders
    v{apiVersion}
        {endpoint}DataProvider : I{endpoint}DataProvider
Both interfaces are the same, describing the endpoints. The DataProvider implementation will allow us to use DI to hot-swap the actual data provider/business logic layer during runtime.
The generated ApiControllers will refer to the IDataProviders, and use the actual implementation (the currently active one, that is). For that we're going to use dotnetcore's built-in dependency injection system.
However, I can't seem to find a way to have the operations generator output to three different folders based on the template. Everything ends up jumbled in a single folder, and I need to move the files manually.
Is there a way to meet these requirements, or will I have to keep sorting the output manually?

How to handle changes in objects' structure in automated testing?

I'm curious how feasible it is to get away from depending on the application's internal structure when you create an automated test case. Otherwise you may need to rewrite the test case when a developer modifies a part of the code for a bug fix, etc.
We could write several automated test cases based on the application's internal object structure, but let's assume that the object hierarchy changes after 6 months or so; how do we approach these kinds of issues?
I can't speak for other testing tools but at least in QTP's case the testing tool introduces a level of abstraction over the application so that non-functional changes in the application often (but not always) have no effect on the way the testing tool identifies the object.
For example in QTP all web elements are considered to be direct children of the document so that changes in the DOM (such as additional tables) don't change the object's description.
In TestComplete, there are a couple of ways to make sure that the changed app structure does not break you tests.
You can set up the Aliases tree of the Name Mapping feature. In this case, if the app structure changes, you need to modify the Aliases tree appropriately, and your tests will keep working without any need to modify them.
You can use the Extended Find feature of the Name Mapping in order to ignore parts of the actual object tree and search for the needed objects at deeper levels.
This is what I was forced to do after losing all my work twice due to changes to the DOM structure:
Every single time I need to work with an object, I use the Find function with the ID of the object, searching for the object on the Page object. This way, whenever the DOM gets updated, my tests still run smoothly.
The only thing that will break my tests is if the object's ID gets changed, but that's not very likely to happen.
Here you can find some examples of the helper functions I use.

Drupal 6 - make a module for every block of dynamic information?

I have a Drupal 6 website with about 20 pages. Inside every page, I need to create a lot of widgets with information either stored in the database or coming from external web services. Most of the time, a "view" (from the Views module) is just not enough to meet the requirement.
Up until now, any time I need such a widget, I create a new module which implements hook_block. Then I drag and drop this new module into the panel I want. I will need to create about 20 modules. This works pretty well. However, I'm not sure if this is the correct Drupal strategy, and I would love to receive some feedback from experienced Drupal developers.
A module can expose as many blocks as you want (in theory, admin/build/blocks will teach you otherwise ;)).
Have a look at the documentation of hook_block(): you just need to extend yours to return multiple block infos and then decide which one to show based on the $delta.
So you don't need 20 separate modules; maybe 2-3, grouping the blocks together somehow, because a single module might otherwise be hard to maintain. The thing is that every single module makes your site a tiny bit slower (at least one more file to load, module_implements() needs to loop over every module for every hook, and so on).
Without more information, it's hard to give any better advice. Maybe you could expose your data to views, or write a views plugin to display it in the way you want it, or...
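To make that concrete, here is a minimal Drupal 6 hook_block() sketch along those lines; the module name, deltas and content callbacks are made-up placeholders:

function mymodule_block($op = 'list', $delta = 0, $edit = array()) {
  if ($op == 'list') {
    // One module, several blocks: each entry is keyed by its delta.
    return array(
      'latest_news' => array('info' => t('Widget: latest news')),
      'weather' => array('info' => t('Widget: weather from external service')),
    );
  }
  if ($op == 'view') {
    // Decide what to render based on the delta.
    switch ($delta) {
      case 'latest_news':
        return array('subject' => t('Latest news'), 'content' => mymodule_latest_news_content());
      case 'weather':
        return array('subject' => t('Weather'), 'content' => mymodule_weather_content());
    }
  }
}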
Although Berdir's answer is pretty good, I'm impressed there's no link to any documentation in it. hook_block is meant for several blocks, and they can share functions that build their content. The API page is good; the example it gives defines two blocks at once.
You should notice that each defined block has a delta (a key in the $blocks array). You can have dynamic deltas and use the values in them to fetch data (passing a nid or uid and getting related content, for example).

ASP.NET plugin architecture: reference to other modules

We're currently migrating our ASP Intranet to .NET, and we started to develop this Intranet as one ASP.NET website. This, however, raised some problems regarding Visual Studio (performance, compile time, ...).
Because our Intranet basically consists of modules, we want to separate our project into subprojects in Visual Studio (each module is a subproject).
This also raises some problems, because the modules have references to each other.
Module X uses Module Y and vice versa... (circular dependencies).
What's the best way to develop such an Intranet?
I'll give an example because it's difficult to explain.
We have a module to maintain our employees. Each employee has different documents (a contract, documents created by the employee, ...).
All documents inside our Intranet are maintained by a document module.
The employee-module needs to reference the document-module.
What if in the future I need to reference the employee-module in the document-module?
What's the best way to solve this?
It sounds to me like you have two problems.
First you need to break the business-oriented functionality of the system down into cohesive parts; in terms of object-oriented design there are a few principles which you should be using to guide your thinking:
Common Reuse Principle
Common Closure Principle
The idea is that things which are closely related, to the extent that 'if one needs to be changed, they are all likely to need to be changed', should be packaged together.
Single Responsibility Principle
Don't try to have a component do too much.
I think you also need to look at your dependency structure more closely - as soon as you start getting circular references, it's probably a sign that you haven't broken the various "things" apart correctly. Maybe you need to understand the problem domain better? It's a common problem - well, not so much a problem as simply a part of designing complex systems.
Once you get this sorted out it will make the second part much easier: system architecture and design.
Luckily there's already a lot of existing material on plugins, try searching by tag, e.g:
https://stackoverflow.com/questions/tagged/plugins+.net
https://stackoverflow.com/questions/tagged/plugins+architecture
Edit:
Assets are defined in a different module than employees, but the Asset class defines a property 'AssignedTo' which is of the type 'Employee'. I've been racking my brain over how to disconnect these two.
There are two parts to this, and you might want to look at using both:
Using a Common Layer containing simple data structures that all parts of the system can share.
Using Interfaces.
Common Layer / POCO's
POCO stands for "Plain Old CLR Object"; the idea is that POCOs are simple data structures that you can use for exchanging information between layers - or, in your case, between modules that need to remain loosely coupled. POCOs don't contain any business logic. Treat them like you'd treat the String or DateTime types.
So rather than referencing each other, the Asset and Employee classes reference the POCOs.
The idea is to define these in a common assembly that the rest of your application / modules can reference. The assembly which defines these needs to be devoid of unwanted dependencies - which should be easy enough.
Interfaces
This is pretty much the same, but instead of referring to a concrete object (like a POCO) you refer to an interface. These interfaces would be defined in a similar fashion to the POCOs described above (common assembly, no dependencies).
You'd then use a Factory to go and load up the concrete object at runtime. This is basically Dependency Inversion.
So rather than referencing each other, the Asset and Employee classes reference the interfaces, and concrete implementations are instantiated at runtime.
This article might be of assistance for both of the options above: An Introduction to Dependency Inversion
Edit:
I've got the following method GetAsset( int assetID ); In this method, the property asset.AssignedTo (type IAssignable) is filled in. How can I assign this properly?
This depends on where the logic sits, and how you want to architect things.
If you have a Business Logic (BL) layer - which is mainly a comprehensive Domain Model (DM) (of which both Asset and Employee are members) - then it's likely Assets and Employees would know about each other, and when you made a call to populate the Asset you'd probably get the appropriate Employee data as well. In this case the BL / DM is asking for the data - not isolated Asset and Employee classes.
In this case your "modules" would be another layer that was built on top of the BL / DM described above.
A variation on this is that inside GetAsset() you only get asset data, and at some point after that you get the employee data separately. No matter how loosely you couple things, there is going to have to be some point at which you define the connection between Asset and Employee, even if it's just in data.
This suggests some sort of Register Pattern, a place where "connections" are defined, and anytime you deal with a type which is 'IAssignable' you know you need to check the register for any possible assignments.
I would look into creating interfaces for your plug-ins. That way you will be able to add new modules, and as long as they follow the interface specifications, your projects will be able to call them without explicitly knowing anything about them.
We use this to create plug-ins for our application. Each plugin is encapsulated in a user control that implements a specific interface. We then add new modules whenever we want; because they are user controls, we can store the path to the control in the database and use LoadControl to load them, and we use the interface to manipulate them. The page that loads them doesn't need to know anything about what they do.

Best way to save extra data for user in Drupal 6

I am developing a site that is saving non-visible user data upon login (e.g. an external ID on another site). We are going to create/save this data as soon as the account is created.
I could see us saving data using
the content profile module (already in use on our side)
the profile module
the data column in the user table
creating our own table
I feel like #1 is probably the most logical place; however, creating a node within a module does not seem to be a trivial thing.
#3 feels like a typical way to solve this, but just having a bunch of serialized data in a catchall field does not feel like the best design.
Any suggestions?
IMO, each option has its pros and cons, and you should be the one to make the final call, given that you are the only one who knows what your project is about, what the critical points of the project are, what the expected typical usage pattern is, what resources are available, etc...
If I were totally free to choose, my personal favourites would be options #4, #1 and #5 (wait! #5? Yes: see below!). My guiding principles in making the choice would be:
Keep it clean
Keep it simple
Make it extensible
#1 - The content profile module
This would be a clean solution in that you would make it easier for developers to maintain your code, as all the alterations to the user would pass through the same channel, and it would be easier to track down problems or add new functionality.
I do not find it particularly simple, as it requires you to interact with the custom API of that module.
As for extensibility, that depends on how well the content profile module's API is designed. The temptation could be to simply use the tables created by said module for your purpose, bypassing the API, but that exposes you to the possibility that one day, during a critical security update when you are in a hurry, the entire system will break down because the schema has changed...
#4 - Creating your own table
This would be a clean solution because you could design your table (and your module) to do exactly what you need, and you could create your own API to be used by other modules. On the other hand, you would introduce yet another piece of code altering the registration process, and this might make it more difficult for devs to track problems and expand the system in a consistent way.
This would be very simple to implement code-wise. The DB design would benefit too: the tables would be very easy to inspect and query. Creating a new handler for Views is dead easy in most cases: 4 times out of 5 you simply use one of the prototype objects shipping with Views.
This would be extremely easy to extend, of course. Once you created the module for one field, you could manage as many as you want by mostly copy/pasting the code from one field to another (or by inheriting from the same ancestor if you go OOP).
I understand you are already knowledgeable about drupal, but if you need a pointer on how to do that, I gave some direction in this other answer.
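For illustration, here is a rough Drupal 6 sketch of option #4; the module, table and column names are made up, and mymodule_lookup_external_id() stands in for whatever fetches the value from the other system:

// mymodule.install
function mymodule_schema() {
  return array(
    'mymodule_user_data' => array(
      'description' => 'Extra, non-visible data stored per user.',
      'fields' => array(
        'uid' => array('type' => 'int', 'unsigned' => TRUE, 'not null' => TRUE),
        'external_id' => array('type' => 'varchar', 'length' => 64, 'not null' => TRUE, 'default' => ''),
      ),
      'primary key' => array('uid'),
    ),
  );
}

function mymodule_install() {
  drupal_install_schema('mymodule');
}

// mymodule.module: save the value as soon as the account is created.
function mymodule_user($op, &$edit, &$account, $category = NULL) {
  if ($op == 'insert') {
    db_query("INSERT INTO {mymodule_user_data} (uid, external_id) VALUES (%d, '%s')",
      $account->uid, mymodule_lookup_external_id($account));
  }
}

From there, exposing the table to Views or adding more columns is the easy, copy/paste part described above.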
#5 - Creating your own table and porting already existing fields there
That would essentially be #4 minus the drawback of scattering functionality across various modules... Of course, if you are already managing 200 fields the other way, this is not a viable option, but if you are early in your design, you could consider it.
In my experience nearly every project that requires system integration (meaning: synchronising data for the same user in multiple systems) has custom needs for user registration, and I found this solution the one that suits my need best for two reasons:
I found I reuse a lot of the custom code I wrote from project to project.
It's the most flexible way to integrate data with the other system (in some cases I even split the data for the user into two custom tables managed by the same module: one that contains custom fields used by Drupal only, and one that contains the "non-visible fields", as you call them). I find this very handy in a lot of scenarios, as it makes it very easy to inspect and manipulate the data of the two systems separately.
HTH!
If you're already using the Content Profile module, then I'd really suggest continuing to use it and attach the field to it. You're saving the data when the account is created, and creating a node for the user at the same time isn't that hard. Really.
$node = new stdClass();
$node->type = 'profile'; // assuming your content profile content type's machine name is 'profile'
$node->title = $user->name; // what I'd use, or you can have node auto title handle the title for you. Up to you!
$node->field_hidden_field[0]['value'] = '$*#$f82hff';
$node->uid = $user->uid;
node_save($node);
Bob's your uncle!
I would go for option #3. Eventually even other modules will store their data in the database against a user, so you could directly save it yourself, probably in a more efficient way than those modules do.
Of course, depending on the size of the data, you can decide whether to add a new column to the users table or to create a new table for your data with the user's ID as the foreign key.
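As a minimal sketch of the first variant in Drupal 6 (the key name here is just an example): any key passed to user_save() that doesn't match a column of the users table ends up serialized in its data column and reappears on the loaded user object.

// $account is a loaded user object; 'mymodule_external_id' is a made-up key.
user_save($account, array('mymodule_external_id' => $external_id));
// Later, after user_load(), the value is available as $account->mymodule_external_id.

The custom-table variant would instead look much like the option #4 sketch in the answer above.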
