I have two questions relating to the use of ngrx/store:
I have a big application, with different modules like customers, suppliers, etc., that are lazy loaded.
Should I have a store per module or one global store? Is it possible to dynamically inject the local stores into the global store when a module is loaded?
I understand the use of a local store, but when is the right moment to update the server database? Is that why it's better to use ngrx/effects?
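Ideally I'd want something like the following rough sketch, assuming an API along the lines of @ngrx's forFeature exists for registering a module's slice lazily; the customer names, the CustomersService and the HTTP endpoint are just placeholders:

    import { NgModule, Injectable } from '@angular/core';
    import { HttpClient } from '@angular/common/http';
    import { StoreModule, createReducer, createAction, on, props } from '@ngrx/store';
    import { EffectsModule, Actions, createEffect, ofType } from '@ngrx/effects';
    import { map, mergeMap } from 'rxjs/operators';

    // Actions and reducer for the lazily loaded "customers" slice (names are placeholders).
    export const saveCustomer = createAction('[Customers] Save', props<{ name: string }>());
    export const saveCustomerSuccess = createAction('[Customers] Save Success');

    const customersReducer = createReducer(
      { saving: false },
      on(saveCustomer, state => ({ ...state, saving: true })),
      on(saveCustomerSuccess, state => ({ ...state, saving: false }))
    );

    // Hypothetical service that talks to the server database.
    @Injectable({ providedIn: 'root' })
    export class CustomersService {
      constructor(private http: HttpClient) {}
      save(customer: { name: string }) {
        return this.http.post('/api/customers', customer); // placeholder endpoint
      }
    }

    // The effect would be the place where the server database actually gets updated.
    @Injectable()
    export class CustomersEffects {
      save$ = createEffect(() =>
        this.actions$.pipe(
          ofType(saveCustomer),
          mergeMap(({ name }) => this.api.save({ name }).pipe(map(() => saveCustomerSuccess())))
        )
      );
      constructor(private actions$: Actions, private api: CustomersService) {}
    }

    @NgModule({
      imports: [
        // forFeature registers this module's state and effects only when the lazy module loads.
        StoreModule.forFeature('customers', customersReducer),
        EffectsModule.forFeature([CustomersEffects]),
      ],
    })
    export class CustomersModule {}

So the UI would dispatch actions against the feature store, and the effect would decide when to push the change to the server.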
Thanks for your answers.
Marcos
I'm supposed to make CMF feed a Redis queue, which will then be polled by other servers. I'm still learning about CMF, and it has become a little overwhelming to understand. I've been using plain Symfony2 for a while, though.
I understand CMF can save the changes I made in the WYSIWYG editor as XML in the database. How much control do I have over this? Is there any project trying to interface CMF and Redis (or another non-Doctrine database)?
I'm guessing I can implement a controller that would fetch these edited fragments from the database and push them to Redis. But the fragments are in XML. Is there anything already built to fetch this data?
I appreciate any pointers. Thank you.
First, let's briefly separate two things: the CMF is a set of components and Bundles that can largely be used independently of each other. All of them are storage agnostic, but many currently only ship with support for PHPCR.
PHPCR, in turn, is a content repository interface for CMSs that supports tree structures, full-text search, etc.
The reference implementation of that is called Jackalope. Jackalope in turn provides different so-called "transports". You seem to be looking at the Doctrine DBAL transport for Jackalope, which indeed stores XML fragments in an RDBMS. There is another one which uses the Jackrabbit Java server.
At any rate, writing a Redis-based transport for Jackalope is probably not what you want. From what I can read, you actually just want a queue stored in Redis? In that case I would just use this Bundle here https://github.com/snc/SncRedisBundle together with standard Symfony2.
If you also want CMS editing capabilities, you can easily add CMF-based editing to any Symfony2 project. You would then use Redis for your queue and one of the Jackalope transport layers for storage, so you would be using more than one database, but this is a sensible architecture.
I have the main website that uses a database to store and access user accounts. I'm using EF to manage the schema. I also defined site-specific POCOs and have migrated them to the database.
Now, what if I want a separate website, for example, a resource server (Web API) that would expose (with authorization) the same data set up on the main website?
Do I create the same POCOs and derived DbContext on the resource server again? That seems like duplicating work, though.
What if I wanted to create new POCOs on the resource server and reflect them onto that same database? Wouldn't that conflict with the current migration (which is saved on the database), then subsequently mess up the EF setup on the main website?
I've seen the suggestion of putting the POCOs and DbContexts in a library and having multiple projects reference that same library. This seems viable; however, I'd have to hard-code the connection string, which seems dirty to me.
I'm starting to think that EF is probably not recommended for this kind of setup. It seems like a database-first approach plays better here, though I would have to manually re-edit the data contexts (most likely LINQ to SQL) for every database schema change.
Are there any lesser-known capabilities, facts, practices, etc., for/about EF that would help in this situation?
Generally, you can avoid duplication by having one API serving both sites, with versioned resources for each if needed. On the other hand, if you choose a reuse-and-add approach, creating additional EF code-first entities should not interfere with the other site's data layer if they are modeled and mapped carefully. The DbContext connection string does not have to be hard-coded; it can be supplied by name from each project's configuration.
Team A has an enterprise app that uses ADO.NET for data access and executes stored procedures. The data access is encapsulated in its own project (let's call it DAL.dll).
Team B is creating another, unrelated app that reuses the stored procedures from the enterprise app. This app currently uses the MS application block for data access. The issue we run into is that whenever Team A makes any change to the input/output params of the stored procedures, there is a runtime error in Team B's app, and that app needs to be updated to accommodate the additional params (or the params that were removed). So, most of these go unnoticed until a user complains. At the very least, we would like the app to throw a compilation error so that the build process warns us of the changes made.
One way to do this is to have Team B's project add a reference to the DAL.dll
I'd like to know if there are any other cleaner ways of solving the issue. We are ready to replace Team B's MS Data application block to use a different technology (Entity Framework?) if necessary.
In addition to the other answers, I'd strongly suggest getting those stored procedures into source control, in a Database Project. You may then be able to use the features of your source control system to do several things:
Lock some of the code so that it cannot be changed
Give you notifications if the code is changed
Warn you if the stored procedures change in a way that would prevent them from being called
Branch the stored procedures so that each team can have their own version of changed code, while keeping the unchanged stored procedures common. You of course will need to separate the different versions in the database.
I agree with the other posters on this thread that you should not share stored procedures across different .NET DLLs; that is just a recipe for disaster. I would also shy away from ORMs like Entity Framework if you are doing anything at all complicated with your database schema, because ORMs excel at translating a simple object model from your .NET application classes into SQL tables and SPs, but traditionally do poorly at optimizing them for performance on the database side. There will be people who claim otherwise, and they may have a valid point if they are experts at wrangling an ORM to do what they want, but chances are you are not, and it will cause you headaches in the long run.
A shared data access layer might work, but conceptually you are then just changing the implementation of the dependency from some code that a DBA wrote to some code that a .NET programmer wrote. Yes, you can use integration tests to achieve better verifiability, but the same case could be made for SQL with tools like Red Gate's SQL Test. I would shy away from this approach if the two applications are already experiencing some sort of pain from sharing SPs; that is an indication that the dependency should just be done away with.
If it were up to me, I'd just make a new schema for Team B's app. You can read more about schemas in SQL Server here: MSDN Schema description for 2008 R2. You can think of them as namespaces for SQL Server, but with some additional bells and whistles like permissions and access control. Separating your different applications into separate schemas on the same shared database will probably make for the most flexible implementation in the long run.
unrelated app that's reusing the stored procedures in the enterprise app
If these two applications are really unrelated, why are they sharing stored procedures or even the same database? I know this is a long read, but I recommend you read this: A Better Path to Enterprise Architectures
The partitioning concept in there relates to the bounded context in Domain-Driven Design:
Multiple models are in play on any large project. Yet when code based on distinct models is combined, software becomes buggy, unreliable, and difficult to understand. Communication among team members becomes confusing. It is often unclear in what context a model should not be applied.
Therefore: Explicitly define the context within which a model applies. Explicitly set boundaries in terms of team organization, usage within specific parts of the application, and physical manifestations such as code bases and database schemas. Keep the model strictly consistent within these bounds, but don’t be distracted or confused by issues outside.
It is expected that you end up with problems when you don't explicitly deal with this. You're lucky you're seeing early failures, as this can turn into problems that are much harder to find in the long run.
Analyze the problem again with the above in mind. Consider if you're missing some explicit context where this common functionality should live.
My question is: which team owns the stored procedures and the shared database? Usually, as a matter of good architecture/design, you should not have two different apps sharing the same database/procedures.
A better way to share data/functionality between two different applications is through a service or API, so that the team that owns the functionality is responsible for maintaining it.
Also, good communication between both teams is highly recommended.
Depending on the owner of the DAL project, you could host web services and share the API. That way, you separate the Data Access Layer from the business logic, which allows anyone to use the same DAL without having to publish it to each different location.
From my point of view, it looks like both Team A and Team B should share the same core model and look at Multitier architecture as a possible solution.
It sounds like it would make sense to create a shared DAL that both applications can share.
I would add unit tests (or really integration tests) to make sure the DAL is compatible with the apps after changes. That way, your tests would fail if incompatible changes have been made.
"I'd like to know if there are any other cleaner ways of solving the issue."
The cleanest way is for Team B to sit down with Team A and encapsulate the relevant business logic into a shared API. It doesn't matter so much how you implement that API; what does matter is that the API's interface is documented and versioned so everyone knows what to expect.
One reasonable mechanism for this in a .NET environment is to use Microsoft's WebAPI.
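Purely to illustrate the documented-and-versioned contract idea (the language and framework here are incidental; in a .NET shop this would be ASP.NET Web API), here is a sketch with hypothetical routes and payload shapes:

    // Sketch only: a tiny versioned API facade in TypeScript/Express, standing in for
    // whatever Team A actually exposes. Routes and response shapes are made up.
    import express from 'express';

    const app = express();
    app.use(express.json());

    // Version 1 of the contract: Team B codes against /v1 and is unaffected
    // when Team A evolves the underlying stored procedures.
    app.get('/v1/orders/:id', (req, res) => {
      // Team A maps its stored procedure (whatever its current parameters are)
      // onto this stable response shape.
      res.json({ id: req.params.id, status: 'open', total: 0 });
    });

    // Breaking changes go into a new version instead of silently changing /v1.
    app.get('/v2/orders/:id', (req, res) => {
      res.json({ id: req.params.id, status: 'open', total: 0, currency: 'USD' });
    });

    app.listen(3000);

The point is that Team B codes against a stable, versioned shape, and changes to Team A's stored procedures stay behind that contract.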
In short, the question of "how do we share a stored procedure?" is most likely looking at the wrong level of abstraction.
Let's say I want to use a different DB than Mongo in Meteor's back-end and also want to use a visualization lib like D3.js on the front-end.
Is that possible at the moment?
How complex would it be to add it by myself if not?
Thanks
The documentation indicates that https://github.com/meteor/meteor/tree/master/packages/mongo-livedata would be the module to start with if you wanted to replace the database functionality:
You can substitute another database for MongoDB by providing a server-side database driver and/or a client-side cache that implements an alternative API. The mongo-livedata is a good starting point for such a project.
-- http://docs.meteor.com/#data
Take a look at this project: https://github.com/austinrivas/meteor-postgresql. If you really need to use a database other than Mongo, Meteor may not be the right choice unless you are experimenting. You can always aggregate data from another DB into Mongo, which might make life easier.
I've been using D3 with Meteor in the form of AngularJS directives, binding the data that drives the visualizations to $scope. DDP makes keeping the data in the D3 visualizations current super convenient.
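Not the AngularJS-directive setup described above, but the same reactive idea works with a plain Blaze template; a minimal sketch, where the Points collection, the chart template, and the field names are made up:

    // Re-render a D3 scatter plot whenever the (hypothetical) Points collection changes.
    // Assumes a <template name="chart"> containing an <svg class="chart" width="400" height="300">.
    import { Template } from 'meteor/templating';
    import { Mongo } from 'meteor/mongo';
    import * as d3 from 'd3';

    const Points = new Mongo.Collection('points');

    Template.chart.onRendered(function () {
      this.autorun(() => {
        const data = Points.find().fetch();            // reactive: re-runs on any change
        const svg = d3.select(this.find('svg.chart'));

        const circles = svg.selectAll('circle').data(data, (d) => d._id);
        circles.enter().append('circle')
          .merge(circles)                              // enter + update in one pass (d3 v4+)
          .attr('cx', (d) => d.x)
          .attr('cy', (d) => d.y)
          .attr('r', 5);
        circles.exit().remove();
      });
    });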
How can we extend the Alfresco database? I need to add new tables to the existing database structure.
Does Alfresco support this?
thanks in advance,
sri..
I think changing the Alfresco DB model is never a good solution. Some Alfresco upgrades are made using schema upgrade scripts, and that could get messy.
Have you tried extending the Alfresco content model?
Alfresco supports several data types, allowing you to persist data. The Web Script framework allows you to manipulate all of your data inside your content model.
If your data is not suitable for a "content model", I think you should create a new database to hold your data.
Well, it is just a database, so you can create as many new tables as you want, just like you would in any other database.
Obviously Alfresco won't use them, because it doesn't know about them, but you can query the tables as you like.
The advice from Alfresco engineers is: do not touch the Alfresco database. Please take a look at this page:
http://forums.alfresco.com/forum/general/non-technical-alfresco-discussion/where-alfreso-user-details-are-stored-i-alfresco
Changing the Alfresco database is not recommended; the content model is the better way. If such a requirement is mandatory, then:
You can use Spring with Hibernate for the database connection. The properties required for connecting to the database are already declared inside alfresco-global.properties, which is located in "tomcat/shared/classes/".
For Spring bean injection, you can declare beans inside any file ending with "-context" that resides in the "tomcat/shared/classes/alfresco/extension" folder.
I would still recommend developers use the content model.
Depending on your use case, you may or may not need to play directly with the database. I think your use case should fall into one of the following:
Use case 1:
You need to set up some metadata on folders and/or documents. You may have to nest multiple levels of nodes with different sets of custom metadata on each level.
You probably need to extend Alfresco's models in order to define custom document/folder models that best suit your business requirements. Please check jpotts' tutorial to learn how to do so.
Use case 2:
You need to define multiple lists with different sets of properties; those lists may or may not be linked to some content in your Alfresco repo.
You probably need to learn more about Alfresco sites' data lists. Once you do, you may be interested in learning how to extend the OOTB Alfresco content model; jpotts' tutorial would be a good starting point. Then you should check this tutorial to learn how to manage data lists in standalone Aikau apps/Share pages.
Use case 3:
You need to leverage a relational database in order to define and implement complex business logic that does not fall into any of the use cases defined above.
Are you sure you do not want to code a brand new app, using a technology you are familiar with, and have it communicate with Alfresco using a RESTful API/CMIS/...?
Are you sure Alfresco is THE way to go? If so, and you still want to have your custom complex business model in a bare relational database:
Please consider using a separate database instance/database for your custom extension. This way you can be sure that any new patch/upgrade to Alfresco that may change the database structure won't affect your extension (or at least won't give you a hard time upgrading it).
If you are really tied to a single database instance / single database schema, you will probably want to prefix your table names and hope that none of Alfresco's future upgrades introduce new tables with the same prefix. You also need to manage your database configuration wisely (connection pools, etc.) so that neither your Alfresco instance nor your custom extension starves; make sure you close the connections you open.
Alfresco and Activiti come with a database, and it is not good to access that database directly. Doing so can cause unexpected locking behavior or exhaust connection pools on the DB, which turns into performance problems, and other kinds of issues are possible too. If you want to update Alfresco or Activiti data, do it through their APIs.