How to set MS Project to choose between two resources when leveling

The company has two welders who are equal in skill and knowledge, and we wish to assign them as resources to the same task.
We have already tried the following:
In the Resource Names field, we entered both: "Richard;Pablo"
But this way, after resource leveling, they share the work on the task at the same time. We want one of these two welders to do the whole task by himself, and the leveling tool to be able to choose between them.
I have already tried using operators to set the Resource Names field to "Richard" OR "Pablo", but this field doesn't accept that kind of expression.
Assigning as Resource Group rather than a resource
This person seems to have an answer, but the explanation is quite short and I couldn't make it work.
I would really appreciate any help on this.

Related

Social Network API design for Like, Comment, and other Interactions?

Suppose I have a website where I showcase people's artwork, recipes, and journals (and this list is likely to grow). I want people to be able to comment on, like, flag, etc. all of them.
I'm not looking for a database schema. I am using ASP.net Web API but the platform should be irrelevant. I am passing an Authorization Bearer header to identify the user who wants to like, comment on, flag, etc. another user's work.
I currently have two "schemes", but I'm not sure what the advantages/disadvantages of each one are, so I can't make a good decision.
The first looks like this:
POST api/<entity>/{id}/<action>
For example, api/journal/21793/like, or api/recipe/1005/comment. Of course, each action would have the appropriate body that includes info to store to the back end.
Implementing it this way, though, would require us to write entityCount x interactionTypeCount functions (actions), which is very tedious, though the API call is friendly.
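For illustration, a hedged sketch of what the first scheme implies in ASP.NET Web API attribute routing (the controller and type names here are invented, not from the question):

using System.Web.Http;

public class JournalController : ApiController
{
    [HttpPost, Route("api/journal/{id}/like")]
    public IHttpActionResult Like(int id)
    {
        // record a like against journal {id}...
        return Ok();
    }

    [HttpPost, Route("api/journal/{id}/comment")]
    public IHttpActionResult Comment(int id, [FromBody] string comment)
    {
        // record the comment against journal {id}...
        return Ok();
    }
}
// ...and the same again for RecipeController, ArtworkController, and every
// future entity type: entityCount x interactionTypeCount actions in total.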
The second API scheme looks like this:
POST api/<action>
For example, api/like or api/comment - and yes, there would be no way to indicate the object type, or which record of that type, in the URL. So, if we were to comment on the recipe mentioned, we would call:
POST api/comment
{"id":"1005", "objectType":"recipe", "comment":"This was excellent!"}
This way, we call only 1 API to comment on anything in our system. So, each of the POST api/ functions would require the id and objectType properties in the body, plus whatever other required data as appropriate for the action. (I'm using JSON as an example).
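As a rough sketch of the second scheme, again in ASP.NET Web API and with invented names, a single endpoint can dispatch on the objectType carried in the body:

using System.Web.Http;

public class CommentRequest
{
    public string Id { get; set; }
    public string ObjectType { get; set; }  // "recipe", "journal", "artwork", ...
    public string Comment { get; set; }
}

public class CommentController : ApiController
{
    [HttpPost, Route("api/comment")]
    public IHttpActionResult Post(CommentRequest request)
    {
        // Rejecting unknown types makes the accepted objectType values
        // discoverable to callers instead of silently failing.
        switch (request.ObjectType)
        {
            case "recipe":
            case "journal":
            case "artwork":
                // look up the record by request.Id and store the comment...
                return Ok();
            default:
                return BadRequest("Unknown objectType: " + request.ObjectType);
        }
    }
}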
I see that developers would have to know beforehand what our accepted objectType values are in order to post interactions to the right record. I'm not sure if this option is a good idea.
This question might get flagged for being off-topic or open to debate, but I'm hoping someone can tell me more pros and cons of each approach indicated above so I can make a better decision. Better yet, offer a solution that is either totally different or combines aspects of the two approaches above.

When would you use 'Real' translation messages in Symfony2?

The Symfony documentation says:
Using Real or Keyword Messages

This example illustrates the two different philosophies when creating messages to be translated:
$translated = $translator->trans('Symfony2 is great');
$translated = $translator->trans('symfony2.great');
< snip >
The choice of which method to use is entirely up to you, but the "keyword" format is often recommended.
http://symfony.com/doc/current/book/translation.html
So when would you use 'Real' messages?
You really have to decide for yourself. It's partly a matter of taste and partly a matter of your translation workflow.
Real messages are good when you don't want the overhead of maintaining an additional translation file (for the origin language). Furthermore, if you forget to translate some of the messages, you'd still see a valid message in the origin language. It's also somewhat easier to translate from an original message rather than a keyword.
Keywords are better when messages are changing often, especially with long texts. You abstract away the purpose of a message from the actual text.
EDIT: there's one more scenario where you could argue that real messages are better than keys: when your website only supports one language, but with multiple variations, like en_GB and en_US. Most of the messages will be the same; only a few will vary. So most of the messages could be left as they are, and only the ones which actually differ between GB and US put into translation files. This would require much less work compared to an approach using keys (assuming, of course, that your messages don't change very often).
One use case for the real format I could come up with is when messages are created by users via the UI; it would be silly to force them to come up with keywords for each phrase they want to translate.
I haven't had such a need yet, so I always use the keyword format.
For the most part I agree with @Jakub Zalas' answer; however, the last line is a bit off.
Keywords are better whenever messages may change, not just when they change often. This is outlined in the docs themselves as well:
The second method is handy because the message key won't need to be changed in every translation file if you decide that the message should actually read "Symfony2 is really great" in the default locale.
If the message changes and you used the message itself as the key rather than a keyword, you have to change every piece of code using that message to reflect the change. More places to change means more potential bugs. Message keys let us build in leverage.
Real messages offer little benefit. IMO, you can use them if you are sure your application will always be mono-language and you want to save a few minutes of development.
Keyword translations have the advantage that, if you have to translate your website, you'll immediately see when a translation is missing.
To facilitate translations, I personally use JMSTranslationBundle.

How can I implement additional entity properties for Entity Framework?

We have a requirement to allow customising our core product and adding additional fields on a per-client basis, e.g. on a People entity, one client wants to record their favourite colour, and so on. As far as I know, we can't add properties to EF at runtime, as it needs the classes defined at startup. Each customer has their own database, but we are deploying the same solution to all customers with all additional code. We then detect which customer they are and run customer-specific services, etc.
Now the last thing I want is to be forking my project, or alternatively adding all fields for all clients; that seems likely to become a nightmare. Also, more often than not, the extra fields would only be required in a very limited number of places: maybe some reports, a couple of screens, etc.
I found this article from Jeremy Miller http://codebetter.com/jeremymiller/2010/02/16/our-extension-properties-story/ describing how they are adding extension properties and having them flow from the domain to the web front end.
Has anyone else implemented anything similar using EF? How did it work out? Are there any blogs/samples that anyone has seen? I am not sure if I am searching for the right thing even if someone could tell me the generic name for what we want to do that would help. I'm guessing it is a problem that comes up for other people.
The linked question still requires some forking, or implementing all possible extensions in a single solution, because you are still creating strongly typed extensions upfront (= you know upfront what extensions the customer wants). It is not a generally extensible solution. If you want a generic, extensible solution, you must leave the strongly typed world and describe extensions as data.
You will need to use some metamodel. Your entity classes will contain only the properties used by all customers, plus a navigation property to a special extension entity (an additional table per extensible entity) where you will be able to put additional properties as name/value pairs (you can add other columns like type, validation, etc. if needed).
This will, in general, move part of your model from a hardcoded scenario to a configuration-based scenario, and your customers will even be allowed to define extensions at runtime (if you implement such a feature).
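A minimal sketch of that metamodel in EF code-first, using invented names (Person, PersonExtension) rather than anything from the question:

using System.Collections.Generic;

// Core entity: only the properties shared by all customers, plus a
// navigation property to the open-ended extension rows.
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<PersonExtension> Extensions { get; set; }
}

// One extension table per extensible entity; each row holds one
// customer-defined field as a name/value pair.
public class PersonExtension
{
    public int Id { get; set; }
    public int PersonId { get; set; }
    public string Name { get; set; }   // e.g. "FavouriteColour"
    public string Value { get; set; }
    // add Type, validation metadata, etc. here if needed
    public virtual Person Person { get; set; }
}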

What are the rules for deciding when a new catalog should be created?

I'd like to learn about using catalogs correctly.
I have about 30 useful content types, about 50 indexes in catalog.xml, and about 45 metadata columns. There are just three types which account for most of the site's data - and I may need millions of these. I've been reading, and there's lots to do, but I want to have the basic configuration right before I begin all that.
This page told me that any non-default indexes should not be added to the portal_catalog. I've even read people explaining how removing one, or two of the default indexes makes a performance difference.
My question is: what are the rules for dividing up the indexes into different catalogs, and for selecting which catalog(s) index which type(s)?
So far I have created one additional catalog, used to catalog all indexes for my 'site-setup' objects (which I have caused to no longer be indexed in portal_catalog). The site-setup indexes are very often used, but more rarely modified than others, so I thought it was correct to separate them from objects which are reindexed more often. I'm not sure if that's the main consideration though.
Another similar question (a good example of the kind of thing I want to solve): how would you handle something like secondary workflow review_state variables? I give each workflow's review_state variable an index (and search on them quite often), but some of my workflows are only used on just a few types. (my most prolific objects have secondary workflows...)
I'd be very grateful for advice!
Campbell
This won't cover everything, but I'll bring up some points:
Anything not in the portal_catalog won't work with collections, the folder_contents view, the getFolderContents method, search, collection portlets, related items (I think), and anything else that assumes you're using the portal_catalog.
I like to use an additional catalog when I need to be able to query the data but it only affects a sub-set of the content objects.
Use collective.indexing to speed up indexing operations.
Mount the catalogs on their own mount points so you can cache them differently from the rest of the site (so you can cache the whole catalog). Then you can even serve the catalogs from a dedicated zeoserver.
Also, if your content doesn't have to be cataloged by the portal_catalog (with all the constraints listed), you may even want to think about whether you need it as a full-fledged (archetype|dexterity) type in the first place. You can use the slimmer repoze.catalog to catalog arbitrary objects (which could be very simple data) for whatever your purpose is and get even more performance. Or better yet, look into Solr for indexing for VERY good performance.
One more thing: depending on the type of data you're storing, you could even look into using a relational database as a data store. But I don't know what kind of queries, indexes, data, etc. you have...
30 different types seems like a lot but I don't know what your use case is. Care to share? Perhaps there is a better way to do it.

asp.net profile provider

I know there are already some questions on this topic on the site...
I am just trying to understand whether it's safe to use the ASP.NET Profile Provider on a website with huge traffic.
The way I see it, it's laid out inefficiently. You store the property name (which is a string) and the property value (also a string). If you are just trying to store, say, age in the profile, you are unnecessarily storing the string "age" in the database over and over, whereas with a self-created table you could just add a column titled age, with no redundancy.
(I am just trying to make sure I am not missing something about it, because I am fairly new to it.)
The profile provider uses an EAV (Entity-Attribute-Value) design deliberately, because profiles in general very commonly have a sparsely populated schema - that is, there are many potential attributes, but only a few will be used for a given single entity, and the few that are used varies widely from one entity to the next.
Let's use a totally arbitrary example - let's say only one in 10 of your users want to provide their age. Making that a column now seems more like a waste, no?
But what if your application makes age mandatory? OK, that column gets populated for everyone. But what if you need to make a note in the profile that the user doesn't want to see some obscure dialog anymore? Do you really want a column for every single dialog in your application recording whether a user wants to see it? Probably not. When you get into the little one-off details of an application of any significant scope, EAV actually becomes the more economical choice.
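To make the trade-off concrete, here is a hypothetical sketch of the two storage shapes (all names invented; the real provider's storage differs in its details):

using System;

// EAV: one narrow table holds any attribute for any user, so a sparsely
// used attribute costs a row only where it is actually set.
public class ProfileEntry
{
    public Guid UserId { get; set; }
    public string PropertyName { get; set; }   // "age", "hideFooDialog", ...
    public string PropertyValue { get; set; }  // values serialized as strings
}

// Column-per-attribute: every new one-off preference forces a schema
// change and is stored (mostly as NULL) for every single user.
public class WideProfileRow
{
    public Guid UserId { get; set; }
    public int? Age { get; set; }
    public bool? HideFooDialog { get; set; }
    // ...one column per preference
}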
In general, it scales quite well (far better than you probably think). In the specific, it doesn't matter - as always, use what works and fix performance problems when they come up. Whatever the scalability limitations of the profile provider are, you'll know when you hit them. I guarantee two things: (1) you'll have to fix a lot of other performance problems you didn't expect before you have to fix that; and (2) if your site is getting enough traffic to break the profile provider, that's a good problem to have.
I agree with Rex M, unless you have a need to do things like sort all your users by age or do other procedures with aggregate profile data. Then you could consider rolling your own. But for just storing properties that you access here and there on a user-by-user basis, Rex M is right.
I do know what you mean. Wouldn't it make sense to supplement the profile provider's table with another table that has columns for the mandatory fields? Or do you think the overhead of the join would make it not worth it?
