I've been using the global module Log.Global and the functions info and debug, but my application is growing larger and I need more flexibility. Specifically, I'd like to be able to enable/disable events from different components of the system. I'm wondering what is considered best practice in general (I mention Async since it's the API I'm using, but the question is more general).
It seems the API provides at least two ways to do it. There is a ?tags:(string * string) list optional parameter. When used, it appends the tags after the log message.
2017-08-23 17:35:38.090225+07:00 Info starting my program -- [foobar: foobar]
I could use that and then filter my log based on the events I want to see. How are tags used in general: are they typically literals, or are there other ways of using them?
I could also use more than one logger. For instance, the functor Make_global () : Global_intf creates a singleton. I could create one logger for each component. Is there a good reason to choose this approach, besides having a different output for each log and setting the log levels independently?
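For concreteness, here is a minimal sketch of both approaches, assuming Async's Log module; the component names, tag values, and levels are only illustrative:

open Async

(* Approach 1: tag each event with its component, then filter when reading the log. *)
let () =
  Log.Global.info ~tags:[ ("component", "scheduler") ] "starting my program"

(* Approach 2: one singleton logger per component via Make_global. *)
module Scheduler_log = Log.Make_global ()
module Net_log = Log.Make_global ()

let () =
  Scheduler_log.set_level `Debug;  (* verbose while debugging this component *)
  Net_log.set_level `Error;        (* quiet everything below Error here *)
  Scheduler_log.debug "tick";
  Net_log.info "connection accepted"  (* dropped: below Net_log's level *)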
I'm using Protobuf 3 along with gRPC in a distributed environment ("microservices").
Because Protobuf 3 does not support not-set/missing values, I ran into the following issue related to contract additivity.
Imagine I have Service A and a couple of consumer services, B and C, owned by Team B and Team C.
If I add a field, say a boolean, to the contract of Service A, it will at first hold its default value, which gets written as-is to, say, a database.
Then, Team B updates their service to talk using the updated contract and passes 'true' as the field value.
Then, Team C, still using the old contract, calls the same service, and the value gets reset to false. But Team C didn't mean to do that; moreover, they weren't aware of the field at all.
Thus, Service A cannot extend its contract at all, because consumers that haven't been updated yet (for whatever reason) can corrupt the data, and Service A can do nothing about it.
In Thrift, such things are handled with a single check (.isSet()).
There are dirty workarounds, like wrapping primitives into objects, but that forces library-implementation-specific checks-by-reference (at least in Java), which seems more like a poor hack than a robust solution. Also, I would eventually have to wrap everything in wrappers, which, as you can imagine, is not a great solution either.
What best practices do you use to manage such situations in Protobuf 3 in 2017? How do you manage/coordinate contract updates between teams/services? Thanks!
Note: this question is not exactly about how to implement detection of not-set/missing values, but rather about how to live with that and follow the Protobuf 3 philosophy.
I think the problem here is that trying to check for field presence this way is not really an idiomatic use of protocol buffers (not even in proto2). It sounds like you are trying to evolve your schema by adding new fields but not reading those new fields unless you're sure they came from an updated client. The idiomatic way is different: make sure the defaults for the new fields are reasonable and maintain compatible behavior if they're not explicitly set. Then don't try to check for presence at all--just read the fields, and older clients will get good default behavior.
To give you an example, let's say you're adding a new feature that can be enabled or disabled. The right way to do this would be to add a bool field to your request message called enable_new_feature. Since older clients don't know about this field, it will default to false in their requests, so they get the old behavior they're expecting. Adding a disable_new_feature field instead would probably be the wrong way to do it, because then you would indeed break older clients by enabling something they didn't want.
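A hedged proto3 sketch of that approach (the message and field names are invented for illustration):

syntax = "proto3";

message FrobnicateRequest {
  string resource_id = 1;
  // Added after the initial release. Older clients never set it, so in
  // their requests it defaults to false and they keep the old behavior.
  bool enable_new_feature = 2;
}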
Using oneof looks like a better/cleaner alternative to wrappers. See this answer to a similar question: https://stackoverflow.com/a/40552570/618259
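For reference, the trick in that answer is to wrap the scalar in a single-field oneof, which makes presence observable in proto3; the names below are illustrative:

message CustomerUpdate {
  oneof is_gold_oneof {
    // The generated case enum reports *_NOT_SET when the client
    // never supplied the field, so the service can skip the write.
    bool is_gold = 1;
  }
}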
What would be a proper way to handle global "settings" in my sailsjs application? The user will want to change those settings via the web front end of my app.
I imagine I could use a new model "GlobalSettings" with only one item, but I don't really know if it's a good "MVC" practice.
Since it is based on user input, it has to be stored in a database, and therefore storing it in a model seems like the right choice to me.
Having just one row/collection is completely OK in my opinion, especially in the NoSQL world. But for more reusability and scalability, you might want to consider actually storing each setting in an individual row; that gives you room to expand its usability in the future.
In my own experience, as a web app develops you start to realize there are more and more fields you want the user to set up as preferences, and keeping the application flexible like this is good practice.
For me, I usually set up a meta_data model with name, value, criteria, and some other fields.
For example, when viewing your web page, 'Alice' may want a background color of black while 'Bob' may want a background color of green. You can let them modify or insert rows in this meta_data collection. In your database, you will then have:
name              value  criteria
background_color  black  user_name='Alice'
background_color  green  user_name='Bob'
and it can hold all kinds of values.
Of course, if you just have one value that can be changed by all of your users, it is probably a good idea to know who updated it. For this you would want to create a trigger (if you are using a SQL database; see triggers in MySQL), so that every update on the table fires a function that stores what was changed, and by whom, in another table.
So yes, to answer your question, it is totally OK to have a model to store a value, and don't worry about only having one row; you will have more as you develop your app.
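To make that concrete, here is a minimal sketch of such a model in Sails (the attribute names mirror the table above; the controller usage is illustrative):

// api/models/MetaData.js
module.exports = {
  attributes: {
    name:     { type: 'string', required: true },  // e.g. 'background_color'
    value:    { type: 'string', required: true },  // e.g. 'black'
    criteria: { type: 'string' }                   // e.g. "user_name='Alice'"
  }
};

// Reading a preference in a controller action:
MetaData.findOne({ name: 'background_color', criteria: "user_name='Alice'" })
  .exec(function (err, setting) { /* apply setting.value */ });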
The config/globals.js file seems to be a good place to put a global configuration setting.
For convenience, Sails exposes a handful of global variables. By default, your app's models, services, and the global sails object are all available on the global scope; meaning you can refer to them by name anywhere in your backend code (as long as Sails has been loaded).
Nothing in Sails core relies on these global variables - each and every global exposed in Sails may be disabled in sails.config.globals (conventionally configured in config/globals.js).
Sailsjs.org Documentation - Globals
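For illustration, a sketch of that file with the common toggles (key names per the Sails v0.12-era defaults):

// config/globals.js
module.exports.globals = {
  sails: true,     // expose the `sails` app object globally
  models: true,    // expose models (e.g. GlobalSettings) globally
  services: true,  // expose services globally
  _: true,         // expose lodash
  async: true      // expose async
};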
Alternatively you can use the sails.config object.
Custom Configuration
Sails recognizes many different settings, namespaced under different top-level keys (e.g. sails.config.sockets and sails.config.blueprints). However, you can also use sails.config for your own custom configuration (e.g. sails.config.someProprietaryAPI.secret).
From the docs
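A sketch of how that might look (the someProprietaryAPI key comes from the quoted docs; the file name and environment lookup are my own illustration):

// config/custom.js
module.exports.someProprietaryAPI = {
  secret: process.env.PROPRIETARY_API_SECRET || 'change-me'
};

// Later, anywhere in the app:
// var secret = sails.config.someProprietaryAPI.secret;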
There are also services, which are global by default.
Overview
Services can be thought of as libraries which contain functions that you might want to use in many places of your application. For example, you might have an EmailService which wraps some default email message boilerplate code that you would want to use in many parts of your application. The main benefit of using services in Sails is that they are globalized--you don't have to use require() to access them.
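For instance, a minimal sketch of such a service (the function name and body are made up):

// api/services/EmailService.js
module.exports = {
  sendWelcome: function (toAddress, cb) {
    // wrap your default email boilerplate here
    var body = 'Welcome aboard!';
    // ...hand the message off to your mail transport of choice...
    return cb(null, { to: toAddress, body: body });
  }
};

// Globalized: callable from any controller as
// EmailService.sendWelcome('user@example.com', done);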
It really depends on what kind of global you are wanting.
Let's try this explanation again...
I'm new to Polymer (and getting back into web dev after a relatively long absence), and I'm wondering what the recommended approach might be to more closely manage object state while employing two-way data binding. I am currently consuming REST API (JSON) objects.
My question is whether Polymer keeps a copy of the original object before initiating updates to the bound object's properties/attributes, so that one might be able to easily undo the changes. While letting two-way data binding work its magic is often desired, there are cases where I'd like to prevent or delay changes to the object/DOM until the user approves them (say via the paper-dialog component, for instance).
I suppose one could make a temporary copy of the object and bind fields to that version, and then only persist the changes back to the source object upon user approval. In any case, I'd be interested to hear thoughts and see an example or two of recommended approaches (especially if I am off-track with my ideas!)
I suppose one could make a temporary copy of the object and bind fields to that version, and then only persist the changes back to the source object upon user approval
This.
Consider that view-models are essentially different from pure data-models (sometimes called business-data). Frequently, the differences are irrelevant and one can use them interchangeably. However, be aware of scenarios where the view-model is distinct (uncommitted user edits are a good example).
The notion of a field editor that requires approval from the user is purely UI/View oriented. Whatever data is managed in that modality is purely in the domain of the view, and fetches/commits to the business-data should be discrete.
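A minimal sketch of that copy-then-commit pattern, assuming the Polymer 1.x element API and a <paper-dialog id="dialog"> in the template (element, property, and method names are illustrative):

Polymer({
  is: 'user-editor',
  properties: {
    user: Object,   // source object from the REST API
    draft: Object   // working copy the dialog's inputs bind to
  },
  openEditor: function () {
    // Deep-copy so two-way bindings mutate the draft, not the source.
    this.draft = JSON.parse(JSON.stringify(this.user));
    this.$.dialog.open();
  },
  commit: function () {
    this.set('user', this.draft);  // persist and notify bindings
    this.$.dialog.close();
  },
  cancel: function () {
    this.draft = null;             // discard uncommitted edits
    this.$.dialog.close();
  }
});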
So, I need some input on refactoring an ASP.NET (C#) application that is basically a framework for creating dynamic forms (any forms). From a high-level point of view, there is a table that has the forms, and then there is a table that has all the form fields, where it is one-to-many between the two. There is a validation table, where each field can have multiple types of validation, and it is one-to-many from the form fields table to the validation table.
So the issue is that this application has been sold as the be-all-end-all customizable solution to all the clients. The idea is that whatever form they want, we can build it just using DB configuration. The thing is, that is not always possible, because there are complex relationships between the fields and complex relationships between the forms themselves. Also, there is only one codebase, and it serves multiple clients - all of whom host it on their own. There is very specific logic for each of the clients, and it is ALL in the same codebase, with no real separation. Sometimes it was too difficult to make it generic, so there are instances of hard-coded logic (as in, if formID = XXX then do _). You can also have nested forms, as in, one set of fields on its own within each form.
So usually, when one client requests a change, we make the change and deploy it to that client - but then another client requests a different change, and we make that change and deploy it for THAT client, and the change from the earlier client breaks it, and it's a headache trying to debug, because EVERYTHING is dynamic. There is no way we can roll back the earlier change, because then the other client would be screwed.
It's not done in a real 3-tier architecture - it's a web site with references to a DB class and a class library. There is business logic in the web site itself, in the class library, and in the database stored procs (validation is done in the stored procs).
I've been put in charge of re-organizing the whole thing, and these are my thoughts/questions:
I think this is a bad model in general, because one of the things I heard one of the developers say is that any time any client makes a change, we should deploy to everybody - but that is not realistic. If we have, say, 20 clients, there will need to be regression testing on EVERYTHING, since we don't know the impact...
There are about 100 forms in total, and there is some similarity between them (not much). But I think the idea that a dynamic engine can solve ALL form requests was not realistic either. Clients come up with the weirdest requests. For example, they have this engine doing a regular data-entry form AND a search form.
There is a lot of preserving of state between pages, and it is all done using session variables, which is OK, except that it is not really tracked, so sessions from the same user keep getting overwritten; I think sessions should be gotten rid of.
Should I really just rewrite the whole thing? This app is about 3 years old, there has been lots of testing, and serious business logic has been implemented, so I hate to throw all that away (Joel's advice). But it is really a mess of spaghetti code, everything takes forever to do, and things break all the time because of minor changes.
I've been reading Martin Fowler's "Refactoring" and Michael Feathers' "Working Effectively with Legacy Code" - and they are good, but I feel they were written for an application that was 'slightly' better architected, where it is still a 3-tiered architecture and there is 'some' resemblance of logic...
Thoughts/input anyone?
Oh, and "Help!"
My current project sounds like almost exactly the same product you're describing. Fortunately, I learned most of my hardest lessons on a former product, and so I was able to start my current project with a clean slate. You should probably read through my answer to this question, which describes my experiences, and the lessons I learned.
The main thing to focus on is the idea that you are building a product. If you can't find a way to implement a particular feature using your current product feature set, you need to spend some additional time thinking about how you could turn this custom one-off feature into a configurable feature that can benefit all (or at least many) of your clients.
So:
If you're referring to the model of being able to create a fully customizable form that makes client-specific code almost unnecessary, that model is perfectly valid; I have a maintainable working product with real, paying clients that can prove it. Regression testing is performed on specific features and configuration combinations, rather than on a specific client implementation. The key pieces that make this possible are:
An administrative interface that is effective at disallowing problematic combinations of configuration options.
A rules engine that allows certain actions in the system to invoke customizable triggers and cause other actions to happen.
An integration framework that allows data to be pulled from, and pushed to, a variety of sources in a configurable manner.
The option to inject custom code as a plugin when absolutely necessary.
Yes, clients come up with weird requests. It's usually worthwhile to suggest alternative solutions that solve the client's problem while still allowing your product to remain robust and configurable for other clients. Sometimes you just have to push back. Other times you'll have to do what they say, but use wise architectural practices to minimize the impact on other clients' code.
Minimize use of the session to track state. Each page should have enough information on it to track the current page's state. Information that needs to persist even if the user clicks "Back" and starts doing something else should be stored in a database. I have found it useful, however, to keep a sort of breadcrumb tree on the session, to track how users got to a specific place and where to take them back to when they finish. But the ID of the node they're actually on currently needs to be persisted on a page-by-page basis, and sent back with each request, so weird things don't happen when the user is browsing to different pages in different tabs.
Use incremental refactoring. You may end up re-writing the whole thing twice by the time you're done, or you may never really "finish" the refactoring. But in the meantime, everything will still work, and you'll have new features every so often. As a rule, rewriting the whole thing will take you several times as long as you think it will, so don't try to take the whole thing in a single bite.
I have a number of similar apps for building dynamic forms that I support.
There are a whole lot of things you could or could not do, and you're right to think hard before throwing away 3 years of testing and development.
My input for you to consider is to implement a plug-in architecture on top of what you've got. Any custom code for a form goes in a plug-in, and the name of this plug-in is stored with the form. When you generate a form, the correct plug-in is called to enhance the base functionality. That way you get to move all the custom code out of the existing library. It should also mean fewer breaking changes: each plug-in only affects the form it's attached to.
From that point it'll be easy to refactor the core engine as it's common functionality across all clients & forms.
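As a hedged illustration of that plug-in seam in C# (all type and member names here are invented, not taken from the actual codebase):

using System;

public class FormContext { /* field values, client info, etc. */ }

// Contract every form plug-in implements.
public interface IFormPlugin
{
    // Called after the engine builds the form from its DB definition.
    void Enhance(FormContext context);
}

public static class FormPluginLoader
{
    // The plug-in's assembly-qualified type name is stored with the form.
    public static IFormPlugin Load(string pluginTypeName)
    {
        if (string.IsNullOrEmpty(pluginTypeName))
            return null;  // this form has no custom behavior
        var type = Type.GetType(pluginTypeName, throwOnError: true);
        return (IFormPlugin)Activator.CreateInstance(type);
    }
}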
Since your application seems to have become a big ball of mud, a complete (or almost complete) rewrite might make sense.
You should also take into account newer technologies like document-oriented databases (CouchDB, MongoDB).
Most of the form definitions would probably fit pretty well in a document-oriented database. For example, to define a customer form, you could use a document that looks like this:
{Type:"FormDefinition",
EntityType: "Customer",
Fields: [
{FieldName:"CustomerName",
FieldType:"String",
Validations:[
{ValidationType:"Required"},
{ValidationType:"StringLength", Minimum:15, Maximum:50},
]},
...
{FieldName:"CustomerType",
FieldType:"Dropdown",
PossibleValues: ["Standard", "Valued", "Gold"],
DefaultValue: ["Standard"]
Validations:[
{ValidationType:"Required"},
{
ValidationType:"Custom",
ValidationClass:"MySystem.CustomerName.CustomValidations.CustomerStatus"
}
]},
...
]
};
With this kind of document defining your forms, you could easily add forms and validations that are customer-specific.
You could easily add subforms using a FieldType of SubForm or whatever.
You could define FieldTypes for all common types of fields like e-mail, phone numbers, addresses, etc. The custom validation class referenced above could then look like this:
using System.Collections.Generic;

namespace MySystem.CustomerName.CustomValidations {
    public class CustomerStatus : IValidator {
        private FormContext form;
        private List<ValidationError> validationErrors;

        public CustomerStatus(FormContext fc) {
            this.validationErrors = new List<ValidationError>();
            this.form = fc;
        }

        public List<ValidationError> Validate() {
            // Gold customers must have placed at least 10 orders.
            if (this.form.Fields["CustomerType"] == "Gold" && int.Parse(this.form.Fields["OrderCount"]) < 10) {
                this.validationErrors.Add(new ValidationError("A gold customer must have at least 10 orders"));
            }
            // Valued customers must have placed at least 5 orders.
            if (this.form.Fields["CustomerType"] == "Valued" && int.Parse(this.form.Fields["OrderCount"]) < 5) {
                this.validationErrors.Add(new ValidationError("A valued customer must have at least 5 orders"));
            }
            return this.validationErrors;
        }
    }
}
A record based on that definition could look like this:
{Type: "Record",
 EntityType: "Customer",
 Fields: [
   {FieldName: "CustomerName", Value: "ABC Corp."},
   {FieldName: "CustomerType", Value: "Gold"},
   ...
 ]
};
Sure, this solution is a lot of work, but if/when realized, it would make it really easy to create, update, and customize forms.
This is a common but (IMO) somewhat naive design approach: "Instead of solving the customer's problem, let's build a tool to let them solve their own problems!" The reality is that customers generally want YOU to solve their ACTUAL problems. So build things that solve their problems.
If you can architect it in a way that allows you to reuse some parts for different customers, fine. But that is generally what the frameworks have done for you already - work out the common features that applications need and make them available in neat packages.