What is the best way to sandbox collections in meteor?

I want to isolate each user's data from all other users. What is the best way to keep a user viewing and modifying only her stuff? My approach has been to
Add a userId field on every collection
Configure every published collection to filter on userId.
Use simple-schema with collection2 and add autoValue: function () { return this.userId; } on the userId field of each schema to force the userId during validation.
Is this a good and correct approach? What is best practice?
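A rough sketch of what I mean, using a hypothetical Tasks collection:
Meteor.publish('tasks', function () {
  // Only ever publish the current user's documents
  if (!this.userId) {
    return this.ready();
  }
  return Tasks.find({ userId: this.userId });
});

// Schema (simple-schema + collection2): force ownership during validation
Tasks.attachSchema(new SimpleSchema({
  title: { type: String },
  userId: {
    type: String,
    autoValue: function () {
      if (this.isInsert) {
        return this.userId; // stamp the inserting user's id
      }
      this.unset(); // never let the client change ownership afterwards
    }
  }
}));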

This sounds like a good approach to me, and it is considered best practice as well (in my experience with Meteor).

Sounds solid. I actually use this approach in a fairly large project I'm working on.

I also use composite collections using the reywood:publish-composite package.
As a result, some collections don't have a userId key; the documents for the current user are instead selected based on a document from a related collection. I'm also dealing with the case where many documents are shared between users.
This affords some degree of normalization while working really well.
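For illustration, a simplified sketch of such a composite publication (the Projects/Tasks collections here are just made-up examples):
// Publish the current user's projects plus the tasks that belong to them.
// The tasks carry no userId of their own; they are selected via the related project.
Meteor.publishComposite('userProjectsWithTasks', function () {
  return {
    find: function () {
      return Projects.find({ userId: this.userId });
    },
    children: [{
      find: function (project) {
        return Tasks.find({ projectId: project._id });
      }
    }]
  };
});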

Related

How to do updates with GraphQL mutations (Hot Chocolate)

We recently introduced GraphQL to our project. I've never used it before; however, I like it a lot.
We decided to go with the HotChocolate library for .NET Core and Apollo on the client side.
One thing I am not quite sure about is mutations, specifically performing updates and partial updates with them.
I read somewhere that the best practice is to create a specific mutation for each update, for instance updateUsername(), updateAddress(), updateCity(); each of them should have its own mutation.
The issue with that is that my codebase will grow enormously if I go in that direction, as we are very much data-driven, with a lot of tables and many columns per table.
Another question is how to handle nullable properties. I can create a mutation which accepts some input object, but I'll end up with my entity being overwritten, and all nullable properties not provided by the caller will be set to null.
Is there a way to handle this as a partial update, or should I create a specific update mutation for each property I want updated?
I think you understood the best practice around specific mutations wrong. It's less "have one mutation to update one field" and more "have specific mutations that encapsulate actions in your domain". A concrete example would be creating an "addItemToBasket" mutation, instead of having 3 mutations that update the individual tables related to your shopping basket, etc.
GraphQL is very much centered around front-end development, so your mutations should, in general, closely resemble actions a user can perform in your front-end. E.g. your front-end has an "Add to basket" button and your backend has an "addItemToBasket" mutation that resembles the action of placing an item in the user's basket.
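For illustration only, here is roughly how that looks from the Apollo side: the front-end action maps onto one domain mutation instead of several per-column updates. The addItemToBasket mutation and its fields are hypothetical, not something from your schema:
import { ApolloClient, InMemoryCache, gql } from '@apollo/client';

const client = new ApolloClient({ uri: '/graphql', cache: new InMemoryCache() });

// One mutation per user action ("Add to basket"), not one per table or column.
const ADD_ITEM_TO_BASKET = gql`
  mutation AddItemToBasket($itemId: ID!, $quantity: Int!) {
    addItemToBasket(itemId: $itemId, quantity: $quantity) {
      basket {
        id
        items { id quantity }
      }
    }
  }
`;

client.mutate({
  mutation: ADD_ITEM_TO_BASKET,
  variables: { itemId: '42', quantity: 1 },
});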
If you design your mutations with this in mind, most of the time you shouldn't have an issue with partial updates, since the mutation knows exactly what to do, and you are not just letting consumers of your schema update fields at will.
If for whatever reason you do need these partial updates, you likely won't get around implementing the patching yourself, unless your data provider supports it. That means having an input object type with nullable properties and a mutation that works out which fields have actually been changed and applies them via your data provider.
That being said, there's also a proposal for patching types in the Hot Chocolate repository that should simplify the patching part: https://github.com/ChilliCream/hotchocolate/issues/1326
for instance updateUsername(), updateAddress(), updateCity(); each of them should have its own mutation.
The issue with that is that my codebase will grow enormously if I go in that direction, as we are very much data-driven, with a lot of tables and many columns per table.
Correct. That's practically impossible to follow for any reasonably large data-driven application.
Consider how we implement patching in our API here. Also consider following the discussion about the patching feature in the Hot Chocolate GitHub thread. Hope that helps!

How do I store related Firebase data?

I'm making a Firebase app that has the concept of posts and authors.
I need to have a field called postedBy to give information about the post author. I'm currently confused on how to best implement this. Here's what I've thought about...
Store a postedBy field with the post author's ID as its value. My issue with this is that I then have to send additional requests for the user information, like name, profile picture, etc.
Store a postedBy field with an exact clone of the author's data (name, profile URL, etc.). My issue with this is: what if the user changes their profile information? Do I have to loop through all the post data to apply the changes?
What is the best way to solve an issue like this?
For your case, I would say that the best option would probably be to use only the ID as the value. This way, you can perform queries that return the rest of the data using only the ID.
Using this approach should be the simplest and easiest way for you to create your application. Since the ID will be the point to connect your tables and to perform the queries, it should make your work easier.
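For example, a rough sketch assuming the Realtime Database and the namespaced (v8-style) JavaScript SDK; the paths and field names are just placeholders:
// Assumes firebase.initializeApp(...) has already been called.
var postId = 'post123';
var authorUid = 'user456';

// Store only the author's uid on the post...
firebase.database().ref('posts/' + postId).set({
  title: 'My first post',
  postedBy: authorUid
});

// ...and fetch the author's profile separately when you need it.
firebase.database().ref('posts/' + postId).once('value')
  .then(function (postSnap) {
    return firebase.database().ref('users/' + postSnap.val().postedBy).once('value');
  })
  .then(function (userSnap) {
    var author = userSnap.val(); // { name: ..., profilePicture: ... }
    console.log(author.name);
  });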
In the article below there is an example of a blog application that you can take a look at to get some insights on how to structure your application as well, mainly the part about the author and the post. :)
Learning Firebase: Creating a blog article object
Let me know if the information helped you!

Symfony2 Multiple User Types

Hello, I have a technical question on how to implement multiple user types (not using FOS). I'll explain what I plan to do; I'd like some feedback on what can be done better, or a suggestion on how to do it.
I'll have 4 entities
user
amateur
profesional
organisation
The default one that is implemented right now is user. To get that out of the way: I'll use a second user provider for organisation, as it will not be related to the previous ones.
I'm wondering how to do the User, Amateur, Profesional accounts.
They share most of the basic fields and just have additional information attached to them. Most of them have the same permissions. What's the best way to do it?
1) Adding a one-to-one extra entity to the User class
I'll add one-to-one AmateurInformation and ProInformation relations to the main User class, and determine the account type with a single accountType field.
2) Extending the class with child classes?
I don't even know if something like this is possible. Also, I'd like to make repository queries that return all user types.
Again I'm looking for suggestions on what to look into and how I can do this. I want to improve my knowledge while I'm at it :)
In my view, both suggested solutions are viable (introducing a type field or using inheritance); it really depends on your situation.
I would avoid using multiple independent classes; it will just complicate things.
If the relations to other database entries don't justify it, you don't need to create separate classes; you can use the type field and maybe introduce specific roles (see getRoles() in UserInterface) for your different users.
If your user classes will have access to different things (relations to other entities), I would consider using Single Table Inheritance with Doctrine. With this solution you can still use one UserProvider, and in code you can check the user object; you can also use custom roles to grant permissions:
if ($user instanceof Amateur) { /* ... */ }

Meteor Approach to Collections Considering Joins Aren't Supported by Core Yet?

I'm creating my main project functionality right now, so it's kind of a big decision for my project; I want an efficient and scalable solution. I use different APIs to fetch users' products, ultimately into one collection, to display product information inside a table, with a possible merge by SKU/TITLE across sources.
I have thought of two approaches (in both approaches we add Meteor.userId() to the collection insert so each user has their own products):
1) Create a separate collection for each API and fetch its products into it. During or after the API query, where I insert into sourceXProducts, also run the logic that merges products by SKU and adds only the fields I need to the main usersProducts collection. We still keep the sourceXProducts collections, so if we ever need something that wasn't included in usersProducts we can query it there; we basically keep all the information possible (because it can come in handy).
source1Products = new Meteor.Collection('source1Products');
source2Products = new Meteor.Collection('source2Products');
usersProducts = new Meteor.Collection('usersProducts');
Pros: Honestly I'm not sure. It keeps things organized, and from the way I learned Meteor it seems to be a commonly used pattern.
Cons: Collection joins are not supported in Meteor core yet, so I have to use a package such as meteor-publish-composite, which seems good, but this might hurt performance.
2) Create one collection, insert everything the API response has, and add an apiSource field so we can select products by user and by API.
usersProducts = new Meteor.Collection('usersProducts');
Pros: No joins, possibly better performance
Cons: Less organized; it can become a large collection, which might not be good for MongoDB.
3) Your ideas? :)
First, you should improve the question. You do not tell us anything precise about your schema. What entities do you have, what types of relations are there, and what types of joins do you think you will be doing? How often will you be doing them?
Second, you should rethink your schema and think in terms of a non-relational database. I see many people coming from the SQL world who then simply design their schema in the same way. Wrong. MongoDB is not SQL, and things you learned there you should not try to just reuse here. You should start using features like subdocuments and arrays, which can solve many of the basic things you would do in SQL with joins. So, knowing your schema would help us help you design it. For example, see this answer, and especially the comments, for a discussion of a similar type of question to the one you are asking here.
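For example, instead of a join, a single usersProducts document could embed the per-source data as an array of subdocuments (just a sketch; the fields are made up):
// One document per product and user, embedding per-source data in an array,
// instead of spreading it across several collections that need to be joined.
usersProducts.insert({
  userId: Meteor.userId(),
  sku: 'ABC-123',
  title: 'Example product',
  sources: [
    { api: 'source1', price: 9.99, fetchedAt: new Date() },
    { api: 'source2', price: 10.49, fetchedAt: new Date() }
  ]
});

// All of this user's products that source2 knows about - no join needed.
usersProducts.find({ userId: Meteor.userId(), 'sources.api': 'source2' });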
Third, have you evaluated the various solutions that exist out there? There are many, but you have not shown us that you tried any of them and how they worked for you. What were their pros and cons, for you and your project?
Fourth, if you don't want to evaluate them, you can just use peerlibrary:peerdb and peerlibrary:related. They are simply perfect. You should trust me. I am their author.

Ways to circumvent a bad database schema?

Our team has been asked to write a Web interface to an existing SQL Server backend that has its roots in Access.
One of the requirements/constraints is that we must limit changes to the SQL backend. We can create views and stored procedures but we have been asked to leave the tables/columns as-is.
The SQL backend is less than ideal. Most of the relationships are implicit due to a lack of foreign keys. Some tables lack primary keys. Table and column names are inconsistent and include characters like spaces, slashes, and pound signs.
Other than getting a new job or asking them to reconsider this requirement, can anyone provide any good patterns for addressing this deficiency?
NOTE: We will be using SQL Server 2005 and ASP.NET with the .NET Framework 3.5.
Views and synonyms can only get you so far; fixing the underlying structure is clearly going to require more work.
I would certainly try again to convince the stakeholders that by not fixing the underlying problems they are accruing technical debt, and this will slow development down, eventually to the point where the debt can be crippling. While you can work around it, the debt will still be there.
Even with a data access layer, the underlying problems in the database, such as the missing keys and indexes you mention, will cause trouble if you attempt to scale.
Easy: make sure that you have robust Data Access and Business Logic layers. You must avoid the temptation to program directly to the database from your ASPX codebehinds!
Even with a robust database schema, I now make it a practice never to work with SQL in the codebehinds - a practice that developed only after learning the hard way that it has its drawbacks.
Here are a few tips to help the process along:
First, investigate the ObjectDataSource class. It will allow you to build a robust BLL that can still feed controls like the GridView without using direct SQL. They look like this:
<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
OldValuesParameterFormatString="original_{0}"
SelectMethod="GetArticles" <-- The name of the method in your BLL class
OnObjectCreating="OnObjectCreating" <-- Needed to provide an instance whose constructor takes arguments (see below)
TypeName="MotivationBusinessModel.ContentPagesLogic"> <-- The BLL Class
<SelectParameters>
<asp:SessionParameter DefaultValue="News" Name="category" <-- Pass parameters to the method
SessionField="CurPageCategory" Type="String" />
</SelectParameters>
</asp:ObjectDataSource>
If constructing an instance of your BLL class requires that you pass arguments, you'll need the OnObjectCreating hook. In your codebehind, implement it like so:
public void OnObjectCreating(object sender, ObjectDataSourceEventArgs e)
{
    e.ObjectInstance = new ContentPagesLogic(sessionObj);
}
Next, implementing the BLL requires a few more things that I'll save you the trouble of Googling. Here's an implementation that matches the calls above.
namespace MotivationBusinessModel // My business model namespace
{
    // The class that "stands in" for a table/query - a List of these is
    // returned after I pull the corresponding records from the db
    public class ContentPagesItem
    {
        public int UID { get; set; }
        public string Title { get; set; } // My DAL requires properties but they're a good idea anyway
        // ...etc...
    }

    [DataObject] // Needed to make this class pop up when you are looking up a data source from a GridView, etc.
    public class ContentPagesLogic : BusinessLogic
    {
        public ContentPagesLogic(SessionClass inSessionObj) : base(inSessionObj)
        {
        }

        [DataObjectMethodAttribute(DataObjectMethodType.Select, true)] // Needed to make this *function* pop up as a data source
        public List<ContentPagesItem> GetArticles(string category) // Note the use of a generic list - which is IEnumerable
        {
            using (BSDIQuery qry = new BSDIQuery()) // My DAL - just for perspective
            {
                return
                    qry.Command("Select UID, Title, Content From ContentLinks ")
                       .Where("Category", category)
                       .OrderBy("Title")
                       .ReturnList<ContentPagesItem>();
                // ^-- This is a simple table query but it could be any type of join, View or sproc call.
            }
        }
    }
}
Second, it is easy to add the DAL/BLL DLLs to your solution as additional projects and then add references to them from the main web project. Doing this not only gives your DAL and BLL their own identities, but it makes unit testing a snap.
Third, I almost hate to admit it, but this might be one place where Microsoft's Entity Framework comes in handy. I generally dislike LINQ to Entities, but it does permit the kind of code-side specification of data relationships you are lacking in your database.
Finally, I can see why changes to your db structure (e.g. moving fields around) would be a problem but adding new constraints (especially indices) shouldn't be. Are they afraid that a foreign key will end up causing errors in other software? If so...isn't this sort of a good thing; you have to have some pain to know where the disease lies, no?
At the very least, you should be pushing for the ability to add indexes as needed for performance reasons. Also, I agree with others that Views can go a long way toward making the structure more sensible. However, this really isn't enough in the long run. So...go ahead and build Views (Stored Procedures too) but you should still avoid coding directly to the database. Otherwise, you are still anchoring your implementation to the database schema and escaping it in the future will be harder than if you isolate db interactions to a DAL.
I've been here before. Management thinks it will be faster to use the old schema as-is and move forward. I'd try to convince them that a new schema would make this and all future development faster and more robust.
If you are still shot down and cannot do a complete redesign, approach them with a compromise where you rename columns and tables and add PKs, FKs, and indexes. This shouldn't take much time and will help a lot.
Short of that, you'll have to grind it out. I'd encapsulate everything in stored procedures, where you can add all sorts of checks to enforce data integrity. I'd still sneak in as many of the PKs, FKs, indexes, and column renames as possible.
Since you stated that you need to "limit" the changes to the database schema, it seems you could add a primary key field to those tables that do not have one. Assuming existing applications aren't performing any "SELECT * ..." statements, this shouldn't break existing code. After that's been done, create views of the tables using a uniform approach, and set up triggers on the views for INSERT, UPDATE, and DELETE statements. This will have a minor performance hit, but it will allow for a uniform interface to the database. Any new tables that are added should then conform to the new standard.
Very hacky way of handling it, but it's an approach I've taken in the past with a limited access to modify the existing schema.
I would do an "end-around" if faced with this problem. Tell your bosses that the best way to handle this is to create a "reporting" database, which is essentially a dynamic copy of the original. You would create scripts or a separate data-pump application to update your reporting database with changes to the original, and propagate changes made to your database back to the original, and then write your web interface to communicate only with your "reporting" database.
Once you have this setup in place, you'd be free to rationalize your copy of the database and bring it back to the realm of sanity by adding primary keys, indexes, normalizing it etc. Even in the short-term, this is a better alternative than trying to write your web interface to communicate with a poorly-designed database, and in the long-term your version of the database would probably end up replacing the original.
can anyone provide any good patterns for addressing this deficiency
It seems you have been denied the only real way to "address this deficiency".
I suggest adding integrity-check logic to the stored procedures you will be writing. Foreign keys, unique keys: it can all be enforced manually. Not nice work, but doable.
One thing you could do is create lots of views, so that the table and column names can be more reasonable, getting rid of the troublesome characters. I tend to prefer not having spaces in my table/column names so that I don't have to use square brackets everywhere.
Unfortunately you will have problems due to the lack of foreign keys, but you may want to create indexes on columns, such as a name, that must be unique, to help out.
At least on the bright side you shouldn't really have many joins to do, so your SQL should be simple.
If you use LinqToSql, you can manually create relationships in the DBML. Additionally, you can rename objects to your preferred standard (instead of the slashes, etc).
I would start with indexed views on each of the tables defining a primary key. And perhaps more indexed views in places where you want indexes on the tables. That will start to address some of your performance issues and all of your naming issues.
I would treat these views as if they were raw tables. This is what your stored procedures or data access layer should be manipulating, not the base tables.
In terms of enforcing integrity rules, I prefer to put that in a data access layer in the code with only minimal integrity checks at the database layer. If you prefer the integrity checks in the database, then you should use sprocs extensively to enforce them.
And don't underestimate the possibility of finding a new job ;-)
It really depends how much automated test coverage you have. If you have little or none, you can't make any changes without breaking stuff (If it wasn't already broken to start with).
I would recommend that you sensibly refactor things as you make other changes, but don't do a large-scale refactoring on the DB, it will introduce bugs and/or create too much testing work.
Some things, like missing primary keys, are really difficult to work around and should be rectified sooner rather than later. Inconsistent table names can basically be tolerated forever.
I do a lot of data integration with poorly-designed databases, and the method that I've embraced is to use a cross-referenced database that sits alongside the original source db; this allows me to store views, stored procedures, and maintenance code in an associated database without touching the original schema. The one exception I make to touching the original schema is that I will implement indexes if needed; however, I try to avoid that if at all possible.
The beauty of this method is you can isolate new code from old code, and you can create views that address poor entity definitions, naming errors, etc without breaking other code.
Stu
