What do you advise regarding where to store computed values? Example: the API serves a product property as a union of 1 | 2 | 3. In the app this is interpreted as normal | premium | platinum. When the user changes it, it goes back to the API as 1 | 2 | 3. What is the most reliable way to store it: as the original API values (and compute the display strings with selectors or similar), or as the values I use on the front end?
You are displaying normal | premium | platinum as a visual representation of the actual numeric API values. Since you will need the numeric values again, translating them back and forth bears the risk of introducing errors, e.g. when a new state is added by another developer.
My recommendation is: internally, work with the numeric values. When it comes to showing them to the user, use a selector like selectNameForProduct that will give you the strings. Then you have only one place where you need to deal with and maintain the strings.
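A minimal sketch of that idea in TypeScript, assuming a Redux-style setup; the state shape and the mapping below are illustrative assumptions, not a prescribed API:

// The store keeps the raw API value (1 | 2 | 3); a selector maps it to the
// display string, so the strings live in exactly one place.
type ProductLevel = 1 | 2 | 3;

interface Product {
  id: string;
  level: ProductLevel; // stored exactly as the API serves it
}

const levelNames: Record<ProductLevel, string> = {
  1: "normal",
  2: "premium",
  3: "platinum",
};

const selectNameForProduct = (product: Product): string =>
  levelNames[product.level];

When the user changes the setting, you write the numeric value straight back to the store and to the API; no reverse mapping from string to number is needed.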
Related
I am recording the data for 3 counters and have the choice of using either of the following schemas:
Date|Sensor|Value
Date|Sensor1Value|Sensor2Value|Sensor3Value
When visualizing with either of the above schemas, the x-axis will be the date. In the case of the 1st schema, the sensor will be the legend and the value will be on the y-axis.
Whereas in the case of the 2nd schema, each column will need to be added as a separate y-axis series, and there will be no legend.
Which of the above 2 schemas is better suited for reporting (plotting graphs)?
The best answer will depend on 3 things:
the type of visualizations you're trying to build
which visualization tool(s) you're planning to use, and
if you plan to add more sensor values in the future
Essentially, you're either going to pivot your data when storing it (the second schema, with one column for each value) or you're going to store the data as-is and rely on the visualization tool or your database query to perform the pivot logic.
In my experience working with BI and analytics tools, it's almost always better to store data using the first model (Date | Sensor | Value). This gives you the most flexibility when it comes to visualization tools, and if you need to add more sensor values in the future, you won't need to modify your database table structure. If you need to convert your data into the second model, you can always build a view or temp table that uses a dynamic pivot query.
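For the three sensors above, a static version of that pivot could be sketched as a view like the following; the table and column names are assumptions, and a dynamic pivot would build the column list at runtime instead:

-- Assumes a tall table SensorReadings(ReadingDate, Sensor, Value) following the first schema.
CREATE VIEW SensorReadingsWide AS
SELECT ReadingDate,
       MAX(CASE WHEN Sensor = 'Sensor1' THEN Value END) AS Sensor1Value,
       MAX(CASE WHEN Sensor = 'Sensor2' THEN Value END) AS Sensor2Value,
       MAX(CASE WHEN Sensor = 'Sensor3' THEN Value END) AS Sensor3Value
FROM SensorReadings
GROUP BY ReadingDate;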
I have started to use Selenium + BDD Cucumber and I find the two technologies work quite nicely together. I am a bit concerned about the approach, which could be improved a great deal if Cucumber offered BeforeAll and AfterAll annotations, to achieve faster verification of specific areas with more granularity. Say, for example, I wanted to write this scenario (I am being generic on purpose just to show my point):
Scenario Outline: The customer can update their details
Given I am logged in to the platform
And I navigate to my details page
When I update "<field>" in my details
And I save
Then I should see "<field>" updated
Examples:
| field   |
| name    |
| surname |
| address |
Being a scenario outline, it's going to be executed 3 times as 3 separate scenarios. The problem I see here is that each time the scenario will start from scratch, incurring the delay of logging in every time (or, in general, of performing all the actions needed to get to the point you want to be at).
One could use @Before, but that doesn't change much, because these actions will still be executed every time (and I am also not entirely sure that is the right way of doing things in BDD).
Some people suggest alternating the checks for each field in a series of When/Then steps, which seems to go against basic BDD principles. Also, in this specific case it means the 3 scenarios will be compacted into one, and if the customer fails to update the name field, the others will not be executed, giving me a pass rate of 0% when it could be 66% (assuming for simplicity we only have this one scenario and the other two fields are updated successfully). Moreover, if updating the name has a bug as well as updating the address, I will only be aware of the first problem and will learn about the other only once the "update name" steps succeed.
A BeforeAll would probably solve the problem but it's not available in Cucumber.
What I would like to achieve is to perform a series of steps to get to a specific page and then run tests per field (or with a different level of granularity), executing all of them in a "when I do this, then I should see that..." fashion, so that if anything fails I know what fails and what passes, but at least I am sure everything has been covered and nothing has been skipped because previous steps failed.
Apologies for the long post, but I was hoping to explain my view as clearly as possible. Not sure if it's clear or makes sense at all.
Thanks for your replies
Limiting this to one scenario per field is too rigid for BDD. I like to think of a scenario as a distinct behavior coupled to an assertion that ensures the behavior is functioning. It would be appropriate to test that all the fields are updated in one scenario, unless a group of fields can be isolated as a separate behavior; for instance, changing your email should send a verification, and that should be a separate scenario from the fact that the email field gets updated.
This is where data tables become nice to use. Specify a data table for the fields you want to update, so you can update multiple fields in a single scenario. You can also use a data table in your assertion in order to compare multiple fields at once. For example:
Scenario: The customer can update their details
Given I am logged in to the platform
And I navigate to my details page
When I update my details with:
| Field | Value |
| Name | Bob |
| Surname | Jr |
| Address | 123 Smith St |
And I save
Then my details should be:
| Field | Value |
| Name | Bob |
| Surname | Jr |
| Address | 123 Smith St |
Good day everyone,
I have some questions about how to do calculations on data stored in the database. For example, I have a table:
| ID | Item name | quantity of items | item price | date |
and, for example, I have stored 10,000 records.
The first thing I need to do is pick the items from a date interval, so I won't need the whole database for my calculations. Then, once I have the items from that date interval, I need to compute some additional values, for example:
full price = quantity of items * item price
and store them in a new table for each item. So the table for the items picked from the date interval should look like this:
| ID | Item name | quantity of items | item price | date | full price |
The point is that I don't know how to store the items I picked from the date interval. Do I have to create some temporary table, or something similar?
This will be an ASP.NET web application, and for the calculations in the database I think I will use SQL queries. Maybe there is an easier way to do it? Thank you for your time and help.
Like other people have said, you can perform these calculations on the fly rather than store the results.
However, to answer your question, a query like this should do the trick.
I haven't tested it, so the syntax might be off a touch, but it will get you on the right track.
Ultimately you have to do an INSERT with a SELECT:
insert into itemFullPrice
select id, itemname, itemqty, itemprice, [date], itemqty * itemprice as fullprice
from items
where [date] between '2012/10/01' and '2012/11/01'
Again, don't shoot me if I've got the syntax a little off; it's a busy day today. :D
With 10,000 records, it would not be a good idea to use temporary tables.
You'd be better off with another table, called ProductsPriceHistory, where you periodically calculate and store, let's say, monthly reports.
This way, your reports would be faster and you wouldn't have to do the calculations every time you want to get a report.
Be aware that this approach is fine if your date intervals are fixed, i.e. monthly, quarterly, yearly, etc.
If your date intervals are dynamic, e.g. from 10/20/2011 to 10/25/2011 or from 05/20/2011 to 08/13/2011, this approach won't work.
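If the intervals are fixed, the periodic rollup could be sketched roughly like this; the ProductsPriceHistory columns and the source table/column names are assumptions, not part of the original question:

-- Run once per period, e.g. for October 2012; assumes a source table items(id, itemname, itemqty, itemprice, [date]).
insert into ProductsPriceHistory (itemname, reportyear, reportmonth, totalqty, fullprice)
select itemname, year([date]), month([date]), sum(itemqty), sum(itemqty * itemprice)
from items
where [date] >= '20121001' and [date] < '20121101'
group by itemname, year([date]), month([date]);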
Another approach is to do the calculations in ASP.NET.
Even with 10,000 records, your best bet is to calculate something like this on the fly. This is what relational databases were designed to do.
For instance:
SELECT [quantity of items] * [item price] AS [full price]
, [MyTable].*
FROM [MyTable]
More complex calculations that involve JOINs across 3 or more tables and thousands of records might lend themselves to storing the values.
There are a few approaches:
use a SQL query to calculate the value on the fly; this way nothing is stored in the database
use the same or another table to store the result of the calculation
use a calculated field
If you have a low database load (a few queries per minute, a few thousand rows per fetch), use the first approach.
If calculating on the fly performs poorly (millions of records, many fetches per second, ...), try the second or third approach.
The third one is fine if your database supports calculated and persisted fields, as MS SQL Server does.
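On SQL Server, the third approach might be sketched like this; the Items table and its column names are assumptions:

-- Adds a computed column whose value is physically stored (PERSISTED) and kept up to date by the engine.
alter table Items
add FullPrice as (ItemQty * ItemPrice) persisted;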
EDIT:
Typically, as others have said, you will perform the calculation in your query. That works as long as your project is simple enough.
First, when the table where you store all the items and their prices comes under heavy insert/update/delete traffic from multiple clients, you don't want to block or be blocked by others. You have to understand that, for example, an update on table X can block your select from table X until it finishes (look up page/row locks). This is when a parallel, denormalized structure (a table holding the product together with the calculated values) starts to look attractive; this is where reporting tables, for example, come into play.
Second, when the calculation is simple enough (a*b) and done over not-so-many records, doing it on the fly is fine. But when you have, say, 10M records and have to correlate each row with several other rows and aggregate over groups, a calculated/persisted field can save you time; you can get results 10-100 times faster with that approach.
You should separate concerns in your application:
aspx pages for presentation
SQL Server for data persistence
some kind of intermediate "business" layer for extra logic like fullprice = p * q
E.g. if you are using LINQ to SQL for data retrieval, it is very simple to add the full price to your entities; the same goes for Entity Framework. Also, if you want, you can already do the computation of p*q in the SQL SELECT. Only if performance really becomes an issue should you start thinking about temporary tables, views with clustered indexes, etc.
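If you choose to do the p*q computation in SQL, a plain view is one simple way to expose it to the data layer without persisting anything; the names below are assumptions:

-- fullprice is computed on read; nothing extra is stored.
create view ItemsWithFullPrice as
select id, itemname, itemqty, itemprice, [date], itemqty * itemprice as fullprice
from items;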
I am building an application in ASP.NET, C#, MVC3 and SQL Server 2008.
A form is presented to a user to fill out (name, email, address, etc).
I would like to allow the admin of the application to add extra, dynamic questions to this form.
The number of extra questions and the type of data returned will vary.
For instance, the admin could add 0, 1 or more of the following types of questions:
Do you have a full, clean driving licence?
Rate your driving skills from 1 to 5.
Describe the last time you went on a long journey.
etc ...
Note, that the answers provided could be binary (Q.1), integer (Q.2) or free text (Q.3).
What is the best way of storing random data like this in MS SQL?
Any help would be greatly appreciated.
Thanks in advance.
I would create a table with the following columns, storing the name of the variable along with the value in the appropriate column and leaving all the other value columns null (a T-SQL sketch follows the list).
id: int (primary key)
name: varchar(100)
value_bool: bit (nullable)
value_int: int (nullable)
value_text: varchar(100) (nullable)
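In T-SQL that might look like the following; the table name ExtraAnswers is an assumption for illustration:

create table ExtraAnswers (
    id int identity(1,1) primary key,
    name varchar(100) not null,
    value_bool bit null,
    value_int int null,
    value_text varchar(100) null
);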
Unless space is an issue, I would use VARCHAR(MAX). It gives you up to 8,000 characters and stores numbers and text.
Edit: actually, as Aaron points out below, VARCHAR(MAX) will give you up to 2 billion characters (enough for a book). You might go with VARCHAR(8000) or the like instead, which does give you up to 8,000 characters. Since it is VARCHAR, it does not take up empty space (so a 0 or 1 will not take up 8,000 characters' worth of space, only 1).
The web form I'm working on right now is the electronic version of a contract. The users want default values for a large number of the fields to keep the wording consistent. However, they also want to be able to enter a custom value or select multiple values for some of the fields. I'm finding that the presentation layer is bleeding into the back end quite heavily, and I'm wondering if anyone has tips on how to go about designing an application like this.
EDIT: I wanted to avoid going into the specifics because there is a large amount of business logic involved. But basically I have a form with about 20 fields in it. Three of the fields have select boxes with multiple options in them. These options are the default values I was talking about. But the user also wants to be able to add a "one off" type of value to the select. This represents a specific term in the contract that isn't used often enough to be valuable as a default. My issue is that I'm storing the default values in the database because the users want to be able to add and remove these defaults at will. It's not just a standard data capture screen.
tblRecord (RecordID, SomeFieldID, CustomText)
tlkpSomeField (SomeFieldID, SomeFieldText)
It sucks, but that is actually a pretty common solution. You use the CustomText only when the record shows that SomeFieldID is null (see the query sketched after the sample values below). Your data layer will abstract all of that away, so it will stay clean. You can also store your default value as the first value in the tlkp table.
tlkpSomeField
1 Default Value
2 value1
3 value2
4 ....
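A minimal sketch of reading the effective text under that scheme, assuming the two tables above:

-- Falls back to the free-text value when no lookup value was chosen (SomeFieldID is null).
select r.RecordID,
       coalesce(f.SomeFieldText, r.CustomText) as EffectiveText
from tblRecord r
left join tlkpSomeField f on f.SomeFieldID = r.SomeFieldID;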
If I understand you correctly (without knowing you or your domain), the problem is storing both the default and the chosen value; you sort of end up with redundant data in the database.
It feels weird to store the default flag alongside the chosen item and the free text when a lot of it is just user input. Sort of.
My thoughts/recommendations are:
The default value doesn't have any business value, so don't store it as the user's choice. Just store what the user chose or entered as free text.
Or does it mean something that the user chose the default? Then there is business value to it, and you should store the value/text the user chose plus a flag noting that it happened to be the default value.
Or I might have misunderstood you totally...