I am using Coverity for Java static code analysis, and I need to add some custom rules so that the scan happens according to a custom rule set.
Yes, you can write custom rules with Coverity. There are two APIs you can use:
"Extend" is the older API. Extend rules are written in C++ (regardless of what language you are scanning).
"CodeXM" is the newer API. CodeXM is a domain-specific language designed for writing static analysis rules.
Both APIs are explained in the product documentation, although that is not publicly available. My recollection is that both APIs support the same set of scanned languages, specifically C, C++, Java, and JavaScript.
There are a couple of Synopsys blog posts about CodeXM that might help you get started:
Getting started with writing checkers using CodeXM
Let’s write a CodeXM checker (it’s not rocket science!)
Additionally, as noted in an answer to How can we add custom rules for coverity tool?, sometimes the customization you want to do can be accomplished simply by changing the options to existing checkers. (I do not consider this question to be a duplicate of that one because the other question seems to be more about adjusting the behavior of existing checkers, despite its title.)
Disclosure: I'm a former Coverity/Synopsys employee.
When using the translation API, I get a different (and worse) translation than if I use translate.google.com.
I am working on a project for a client, and the client was dissatisfied with the translation and noticed the difference.
Do these two services use different engines? I read that the API now uses the nmt model, and that translate.google.com already uses the same engine.
Both set to translate from Norwegian to English.
Any more information that can clear this up?
Thanks!
The differences between results from translate.google.com and the Translation API are considered expected behavior; they can arise from maintenance tasks and the logic used by internal processes. However, the engines used for each service seem to be private information.
Based on this, it is normal to get some variance when using the API. As an available workaround, you can use the model parameter to specify which of the available models to use; also take a look at the Specifying a model official documentation for detailed information about this alternative.
Almost three years later, the problem still remains!
So I was trying to translate a dataset with the Google Translate API, but in the end it failed to translate some texts to the target language (in my case, Persian/Farsi). So I decided to check them to see if there's a pattern and maybe translate them using the web version of Google Translate.
As I was doing so, I noticed that the web version actually could translate some of those untranslated texts, BUT not all of them. Trying to find a reason for this behaviour, I found that most of them were names, not sentences. But as we know, names can easily be written with the target language's characters as the translation. So why doesn't the API transform those names while the web version does? This photo should explain it:
[Image: Google Translate results, some showing a "verified translation" badge]
As can be seen, some translations have a badge indicating that the translation has been verified, while some others don't.
So to recap, my guess is that maybe the API is set to only use verified translations, but as for the web version, even unverified translations are allowed since you can edit or report them.
We have a requirement to allow customising our core product by adding additional fields on a per-client basis, e.g. on the People entity one client wants to record their favourite colour, etc. As far as I know we can't add properties to EF at runtime, as it needs classes defined at startup. Each customer has their own database, but we are deploying the same solution to all customers with all additional code. We then detect which customer they are and run customer-specific services, etc.
Now, the last thing I want is to be forking my project, or alternatively adding all fields for all clients; that seems likely to become a nightmare. Also, more often than not the extra fields would only be required in a very limited number of places: maybe some reports, a couple of screens, etc.
I found this article from Jeremy Miller, http://codebetter.com/jeremymiller/2010/02/16/our-extension-properties-story/, describing how they added extension properties and had them flow from the domain to the web front end.
Has anyone else implemented anything similar using EF? How did it work out? Are there any blogs/samples that anyone has seen? I am not sure I am even searching for the right thing; if someone could tell me the generic name for what we want to do, that would help. I'm guessing it is a problem that comes up for other people.
The linked question still requires some forking or implementing all possible extensions in a single solution, because you are still creating strongly typed extensions upfront (= you know upfront what extensions a customer wants). That is not a generally extensible solution. If you want a generic, extensible solution you must leave the strongly typed world and describe extensions as data.
You will need to use some kind of metamodel. Your entity classes will contain only the properties used by all customers, plus a navigation property to a special extension entity (an additional table for every extensible entity) where you will be able to store additional properties as name/value pairs (you can add other columns like type, validation, etc. if needed).
This in general moves part of your model from a hardcoded scenario to a configuration-based scenario, and your customers will even be able to define extensions at runtime (if you implement such a feature).
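A minimal sketch of that shape with EF code-first (the Person/PersonExtension names are just illustrative):

    // Core entity containing only the properties shared by all customers.
    public class Person
    {
        public int PersonId { get; set; }
        public string Name { get; set; }

        // Navigation to the extension table; customer-specific fields live here as data.
        public virtual ICollection<PersonExtension> Extensions { get; set; }
    }

    // One row per additional property per person, e.g. ("FavouriteColour", "Blue").
    public class PersonExtension
    {
        public int PersonExtensionId { get; set; }
        public int PersonId { get; set; }
        public string Name { get; set; }
        public string Value { get; set; } // add Type, Validation, etc. columns if needed
        public virtual Person Person { get; set; }
    }

Reading a customer-specific field then becomes a lookup by name, e.g. person.Extensions.FirstOrDefault(x => x.Name == "FavouriteColour"), rather than a compiled property.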
Has anyone used the WF rules engine outside workflow activities WITHOUT using the Rules Editor or CodeDOM?
Scenario
I am trying to use the rules engine that ships with the Windows Workflow Foundation classes in the .NET Framework for a web-based application. We evaluate hundreds of validation rules before proceeding to a calculation engine for a complex calculation.
I have gone through many blogs describing how to use the rules engine without having Workflow Activities, including https://github.com/geersch/WorkflowRulesEngine.
However, they all eventually end up using the RuleSet Editor for defining rules. I want to define rules declaratively in an XML file.
What I am looking for
I need to be able to specify rules in simple XML without dealing with CodeDOM; CodeDOM seems very complex to write.
I should not be dependent on the RuleSet Editor for defining rules.
Any thoughts, anyone?
I don't know what kind of rules you are evaluating, but if you are searching for a good open-source rules engine, here is one.
Have a look at the following article for a strategy (not a solution) to approach the problem.
http://code.msdn.microsoft.com/windowsdesktop/Creating-Rules-Using-the-23c5d561
The serialization format for WF Rules is CodeDom; it does not have another serialization format built in.
If you want an alternate XML representation, you would need to create a tool to map between the two representations.
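For illustration, a sketch of such a mapping tool. The RuleSet/Rule/CodeDom types are the real WF ones (System.Workflow.Activities.Rules, System.CodeDom); the XML shape, the Order type, and the numeric-only values are assumptions:

    // Hypothetical rule format: <rule name="MinAge" property="Age" op="GreaterThan" value="18"/>
    static Rule MapRule(XElement e)
    {
        // Build the condition from CodeDom expressions; "this" is the object being validated.
        var condition = new CodeBinaryOperatorExpression(
            new CodePropertyReferenceExpression(
                new CodeThisReferenceExpression(), (string)e.Attribute("property")),
            (CodeBinaryOperatorType)Enum.Parse(
                typeof(CodeBinaryOperatorType), (string)e.Attribute("op")),
            new CodePrimitiveExpression(int.Parse((string)e.Attribute("value"))));

        return new Rule((string)e.Attribute("name"))
        {
            Condition = new RuleExpressionCondition(condition)
        };
    }

    // Evaluate against a plain object; no workflow activity involved.
    static void Validate(Order order, XDocument rulesXml)
    {
        var ruleSet = new RuleSet("Validation");
        foreach (var e in rulesXml.Descendants("rule"))
            ruleSet.Rules.Add(MapRule(e));

        var validation = new RuleValidation(typeof(Order), null);
        // (In production, call ruleSet.Validate(validation) first and check the errors.)
        ruleSet.Execute(new RuleExecution(validation, order));
    }

You still pass through CodeDom under the covers, because that is the only condition representation the engine understands, but your rule authors only ever see the XML.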
I'm looking at requiring my team to document their code more thoroughly for some major upcoming projects. To make life a little less painful, I am steering towards XML documentation generators such as Sandcastle, Doxygen or Box Live Documenter.
What are the key considerations I should keep in mind when evaluating the best option and what experiences have led you to a particular decision?
For me the key considerations would be:
Fully automated: Can it be set up in such a way that pretty much no outside work is required to create or edit the documentation?
Fully styled: Can the documentation be fully styled so that it looks great in a wiki or PDF after it's generated? I should be able to change colors, font sizes, layouts, etc.
Good filtering: Can I select only the items I want to be generated? I should be able to filter the namespaces, file types, classes, etc.
Customization: Can I include headers, footers, custom elements, etc.?
I found Doxygen could do all of this. Our workflow is as follows:
Developer makes a change to the code
They update the documentation tags right above the code they just changed
We click a generate button
Doxygen will then extract all the XML documentation from the code, filter it to only include the classes and methods we want, and apply the CSS styling we’ve pre-made for it. Our end result is an internal wiki that looks the way we want, and doesn’t require editing.
Extra: We have all our projects in various git repositories. We pull all these down to one root folder and generate the docs from this root folder.
I would be interested to know how others are automating even further.
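For reference, most of that workflow maps to a handful of Doxyfile options (a sketch; the paths and stylesheet name are placeholders for your own):

    # Sketch of the relevant Doxyfile settings
    INPUT                 = ./repos     # the root folder the git repositories are pulled into
    RECURSIVE             = YES
    EXCLUDE_PATTERNS      = */tests/*   # filter out what we don't want documented
    GENERATE_HTML         = YES
    HTML_EXTRA_STYLESHEET = custom.css  # the pre-made CSS styling
    GENERATE_LATEX        = NO

The "generate button" is then just a call to doxygen against this file.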
Who is paying for the documentation, and why? (Is the system stable enough? Does it add enough value?)
Who is going to read it, and why is she not using a more effective communication channel? (If documentation is the correct choice, it is mostly because of distance in time or place.)
Who is going to keep it up to date?
When are you going to destroy it? (Automatically if it hasn't been read or updated in the past three months?)
To make my life less painful I mostly prefer better code over more documentation, but I do like scenario & unit tests and a high-level architecture description.
[edit] Documentation costs time and money to write and keep up to date. JavaDoc-style documentation has a serious detrimental effect on the amount of code simultaneously visible, and might be a good idea for the developers using the code, but not for those writing it.
I'm conducting a project in which a website should have multi-language support.
Now, this website is supposed to serve 500K+ visitors a day, so it must be super-efficient.
I've created a table of parameters {[ID],[Name]} and a linkage table {[objectID],[parameterID],[languageID],[value]}. I think this is the best way to deploy multi-language support while keeping the flexibility to translate each parameter differently per language.
As far as I know, a server's memory is much faster than a physical HDD. Therefore, I'm planning to store my translation architecture in ASP.NET Application State objects.
(http://msdn.microsoft.com/en-us/library/ms178594.aspx)
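Concretely, the idea would be something along these lines (a sketch; LoadTranslationsFromDatabase and the key format are made up):

    // Global.asax: load the whole translation table into memory once at startup.
    protected void Application_Start(object sender, EventArgs e)
    {
        // Keyed "objectID:parameterID:languageID" -> value, mirroring the linkage table.
        Dictionary<string, string> translations = LoadTranslationsFromDatabase(); // hypothetical helper
        Application["Translations"] = translations;
    }

    // Anywhere in a request: an in-memory dictionary lookup instead of an HDD round-trip.
    var all = (Dictionary<string, string>)HttpContext.Current.Application["Translations"];
    string value = all["42:7:1"]; // objectID 42, parameterID 7, languageID 1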
How does my plan sound so far? any suggestions?
If you are planning on making an app that supports multiple languages, your instant reflex should be to let .NET do the work for you. What I'm reading in your question is that you are setting up something custom to support that. You should know that localization is the way to go when you want to develop a multi-language environment.
Take a look at this MSDN article; it should give you a general idea of the topic.
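In its simplest form, the built-in mechanism only asks you to set the culture and keep your strings in .resx files (a minimal sketch; the Strings resource class name is a placeholder):

    // e.g. in Application_BeginRequest, from the user's profile or the Accept-Language header
    Thread.CurrentThread.CurrentUICulture = new CultureInfo("nb-NO");

    // .NET then resolves the correct resource file by itself
    // (Strings.nb-NO.resx if it exists, falling back to the default Strings.resx).
    string greeting = Resources.Strings.Hello;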
So, localizing an application can be divided into two parts:
Localizing business logic entities.
Localizing everything else.
In the question I see words which relate to business entity localization. For that purpose, I agree with the concept of having a separation between entities and their localizations.
Part 1 - Localizing entities:
Personally, I do it this way in the database:
table Entity {EntityID, Name} - this is the entity-related table.
table EntityByLang {EntityID, LanguageID, Name} - this is the localized version of the table, for each supported language.
This approach lets me have a default value for each localizable property like Name, plus its localization if one is available in the localized table. What's left up to you is to implement the data access layer so that it takes the Name localized for the current user language, or falls back to the default value (if no translation is available for the given language).
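That fallback can be a single left join in the data access layer; a sketch with LINQ to Entities (db, Entities, and EntityByLang are hypothetical names for the context and tables):

    // Localized Name where available, default Name otherwise.
    var names =
        from e in db.Entities
        join l in db.EntityByLang.Where(x => x.LanguageID == currentLanguageId)
            on e.EntityID equals l.EntityID into localized
        from l in localized.DefaultIfEmpty()
        select new { e.EntityID, Name = l != null ? l.Name : e.Name };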
Part 2 - Localizing everything else:
Here, with no real alternatives in terms of performance, I would recommend using some kind of static resources. Personally, I live with the static resources available for standard ASP.NET applications.
From the architectural point of view, don't refer to the localization code directly from your UI code, like this (which I don't like):
var translation = HttpContext.Current.GetGlobalResourceObject("hello");
//excuse me, if I don't exactly remember the GetGlobalResourceObject() method name...
Instead, I would recommend using this kind of approach:
var translation = AppContext.GetLocalizationService().Translate("hello");
Where: AppContext is some kind of facade/factory (in fact, an implementation of an abstract facade/factory), and GetLocalizationService initially returns some kind of ILocalizationService; in this implementation it returns a StaticResLocalizationService (which implements ILocalizationService). This allows switching from one kind of localization to another, and in particular StaticResLocalizationService works with the ASP.NET static resources.
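Fleshed out slightly, that approach could look like this (a sketch; the "Strings" resource class key is a placeholder, the other names come from the description above):

    public interface ILocalizationService
    {
        string Translate(string key);
    }

    // Implementation on top of the standard ASP.NET static (global) resources.
    public class StaticResLocalizationService : ILocalizationService
    {
        public string Translate(string key)
        {
            return (string)HttpContext.GetGlobalResourceObject("Strings", key) ?? key;
        }
    }

    // The facade: UI code only ever sees ILocalizationService.
    public static class AppContext
    {
        private static readonly ILocalizationService service = new StaticResLocalizationService();

        public static ILocalizationService GetLocalizationService()
        {
            return service;
        }
    }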
Sorry for the messy sample code, but I hope you understand my approach.
I hope this helps!
I would suggest creating a custom resource provider; you can read more here:
http://msdn.microsoft.com/en-us/library/aa905797.aspx
With this model you can leverage the existing ASP.NET localization functionality.
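The extension points for that model are ResourceProviderFactory and IResourceProvider from System.Web.Compilation; a minimal sketch (the database lookup is hypothetical):

    // Registered in web.config:
    // <globalization resourceProviderFactoryType="MyApp.DbResourceProviderFactory" />
    public class DbResourceProviderFactory : ResourceProviderFactory
    {
        public override IResourceProvider CreateGlobalResourceProvider(string classKey)
        {
            return new DbResourceProvider(classKey);
        }

        public override IResourceProvider CreateLocalResourceProvider(string virtualPath)
        {
            return new DbResourceProvider(virtualPath);
        }
    }

    public class DbResourceProvider : IResourceProvider
    {
        private readonly string _classKey;
        public DbResourceProvider(string classKey) { _classKey = classKey; }

        public object GetObject(string resourceKey, CultureInfo culture)
        {
            // Fetch the translation from wherever you store it (a database here).
            return LookupInDatabase(_classKey, resourceKey,
                                    culture ?? CultureInfo.CurrentUICulture); // hypothetical helper
        }

        // Only needed for implicit ("meta:resourcekey") localization.
        public IResourceReader ResourceReader
        {
            get { throw new NotSupportedException(); }
        }
    }

Pages can then keep using the normal resource expressions and GetGlobalResourceObject calls, while the strings actually come from your own store.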