A data provider is giving us dynamic report data in a SQL Server database table X. There is also a metadata table Y which holds the report count and the columns (as a semicolon-separated string). In the report data table X, there is a text field which holds all the data for a row, separated by semicolons. The provider is doing this to be dynamic, and I can't influence that choice.
I need to:
Load the metadata from Y
Load the data from X for a selected report from Y
Display the data in a table on a webpage
How would you go about reading this in the model/controller and displaying it in a webgrid/table? For models with fixed columns, this is simple, but what about when the columns are dynamic?
My current solution (which feels dirty) is to parse the data into a DataTable and manually output rows and table cells in the view from that object, with no use of WebGrid, MvcContrib Grid, etc.
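For illustration, here is a minimal sketch of that parsing step. The method name and the sample column/row values are made up; the only assumptions are that Y yields the column names as one semicolon-separated string and X yields one semicolon-separated string per data row.

    using System.Collections.Generic;
    using System.Data;

    public static class ReportParser
    {
        // Builds a DataTable from the metadata string (table Y) and the raw rows (table X).
        // Everything is kept as a string; the view loops over Columns and Rows.
        public static DataTable BuildReportTable(string columnList, IEnumerable<string> rawRows)
        {
            var table = new DataTable();

            // Metadata table Y stores the columns as e.g. "OrderId;Customer;Amount"
            foreach (var columnName in columnList.Split(';'))
                table.Columns.Add(columnName.Trim(), typeof(string));

            // Data table X stores each row's text field as e.g. "1042;Acme;99.50"
            foreach (var raw in rawRows)
                table.Rows.Add(raw.Split(';'));

            return table;
        }
    }

The view then iterates table.Columns for the header and table.Rows for the body, which is exactly the manual rendering described above.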
I chose to parse the table data into a specific dimensional model in my MVC application:
Base objects
Report
Column Definition
Data Row
Report has a list of column definitions and a row count integer. It also has a list of rows. Each row has a list of column values (cells), which are always treated as strings in this code and are only displayed according to the display data type from the column definition.
The display data types I have defined are, for example: text, date, number, and link (several kinds of links to our CRM system: accountlink, userlink, orderlink). I only add new data types when I need to display something differently from the existing ones. I can imagine one day needing a chart data type (where the cell data is a list of plot points, for example).
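In code, the base objects come out roughly like this (a sketch only; the class and enum names are mine, and I use ReportRow rather than "Data Row" to avoid clashing with System.Data.DataRow):

    using System.Collections.Generic;

    public enum DisplayType { Text, Date, Number, AccountLink, UserLink, OrderLink }

    public class ColumnDefinition
    {
        public string Name { get; set; }
        public DisplayType DisplayType { get; set; }
    }

    public class ReportRow
    {
        public ReportRow() { Cells = new List<string>(); }

        // Cell values are always kept as strings; the view decides how to
        // render each one from the matching column's DisplayType.
        public IList<string> Cells { get; set; }
    }

    public class Report
    {
        public Report()
        {
            Columns = new List<ColumnDefinition>();
            Rows = new List<ReportRow>();
        }

        public IList<ColumnDefinition> Columns { get; set; }
        public IList<ReportRow> Rows { get; set; }
        public int RowCount { get; set; }
    }

The view walks Columns once to build the header and then, for each ReportRow, renders each cell according to the DisplayType of the column in the same position.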
This makes the report definition very flexible, but I am probably sacrificing some performance, and it is purely custom. I would still like input on this approach, but the lack of responses in the last 6-7 months probably means this is a weird enough scenario that you, the reader, should avoid it.
I am new to Blue Prism. I have a scenario where I am giving input (passenger details for travelling) to a travel portal, and based on that input it generates a booking reference number, total cost, etc. Now I want to read all the outputs into a collection, but the problem is that the data is not tabular (I can't use Get Table in the read stage). It's just the travel details, which are populated into textboxes. Please find attached the screenshot for more clarity on this.
How to achieve this? Any leads will be appreciated.
Based on the screenshot you've provided, this is part of the Blue Prism Advanced Consolidation Exercise ("BPTravel").
"Get Table" won't work on this data because it is not a table. As you've mentioned, the data is presented in a series of textboxes.
The way to tabularize this data would be to create a Collection in your Process and manually define each of the Field Names in the collection, then read each text field individually into the correct column of the collection.
Read each textbox's value into its own data item. Create a named collection (i.e. a collection with pre-defined column names). Add a row to the collection, then assign each data item to the corresponding collection field. If you need the column names at runtime, the Utility - Collection Manipulation object provides an action that returns them as a collection you can loop through.
I have a DataTable that I'm passing to a FlexCel report. It contains a variable number of columns, so I'm using the Full Dataset feature (e.g. <#table_name.*>).
However, only a subset of the fields are dynamically generated (I have a variable number of attachments). The column name for each attachment field starts with a common word (e.g. "Attachment0", "Attachment1", etc).
What I would like to do is output the known finite set of fields and then the variable number of attachments. It would be nice if I could write something like <#table_name.Attachment*> (and <#table_name.Attachment**>). Is there any way in FlexCel Reports I can achieve the same result?
A side benefit to such a solution means that I could keep the formatting for the known/finite set of fields.
Update
I added placeholder columns to the document, each with a <#delete column> tag, so that the unwanted columns/data are removed.
Although this works, it's not ideal. For example, if I want to see how the columns fit in the page width (in print preview), I need to hide the columns. Then I have to remember to unhide them again so other developers can see/understand my handiwork.
It would be much more straightforward if I could filter the fields before they're output to the document.
I realised there's an alternative way around this problem: I broke the data up into two sets - <#table_name.*> and <#table_name_attachments.*>.
The fixed set of fields is in the first table and the variable set of fields (all the "Attachment*" fields) is in the second. When the report is run, I place them next to each other (in the same order) in the same worksheet. This means I have two table ranges - "_table_name_" and "_table_name_attachments_" - on the one sheet.
Now I'm able to run my print preview without hiding/re-showing the columns-to-be-deleted. I've also eliminated human error - it was all too easy to accidentally set the wrong number of padded/delete columns.
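In case it helps anyone doing the same, here is a sketch of how the split can be done in code before handing the data to FlexCel. The "Attachment" prefix and the file-name parameters are just examples, and you should double-check the FlexCelReport calls against your version's documentation; I believe AddTable and Run are the relevant methods.

    using System.Data;
    using System.Linq;
    using FlexCel.Report;

    public static class AttachmentReportRunner
    {
        // Splits one wide DataTable into a fixed part and an "Attachment*" part,
        // then registers both so each feeds its own table range in the template.
        public static void Run(DataTable source, string templatePath, string outputPath)
        {
            string[] attachmentColumns = source.Columns.Cast<DataColumn>()
                .Select(c => c.ColumnName)
                .Where(name => name.StartsWith("Attachment"))
                .ToArray();

            string[] fixedColumns = source.Columns.Cast<DataColumn>()
                .Select(c => c.ColumnName)
                .Except(attachmentColumns)
                .ToArray();

            // DefaultView.ToTable projects a subset of the columns into a new table.
            DataTable fixedPart = source.DefaultView.ToTable("table_name", false, fixedColumns);
            DataTable attachmentPart = source.DefaultView.ToTable("table_name_attachments", false, attachmentColumns);

            var report = new FlexCelReport();
            report.AddTable("table_name", fixedPart);
            report.AddTable("table_name_attachments", attachmentPart);
            report.Run(templatePath, outputPath);
        }
    }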
I'm trying to add a new calculated column to a SharePoint list that will show elapsed days. I enter a name and write a formula like:
=ABS(ROUND(Today-Created;0))
The data type returned from this formula is: Single line of text
When I try to save, I get an error like:
Calculated columns cannot contain volatile functions like Today and Me.
Calculated Column Values Only Recalculate As Needed
The values in SharePoint columns--even in calculated columns--are stored in SharePoint's underlying SQL Server database.
The calculations in calculated columns are not performed upon page load; rather, they are recalculated only whenever an item is changed (in which case the formula is recalculated just for that specific item), or whenever the column formula is changed (in which case the formula is recalculated for all items).
(As a side note, this is the reason why in SharePoint 2010 you cannot create or change a calculated column on a list that has more than the list view threshold of 5000 items; it would require a mass update of values in all those items, which could impact database performance.)
Thus, in order for calculated columns to accurately store "volatile" values like "Me" and "Today", SharePoint would need to somehow constantly recalculate those column values and continuously update the column values in the database. This simply isn't possible.
Alternatives to Calculated Columns
I suggest taking a different approach entirely instead of using a calculated column for this purpose.
Conditional Formatting: You can apply conditional formatting to highlight records that meet certain criteria. This can be done using SharePoint Designer or HTML/JavaScript.
Filtered List views: Since views of lists are queried and generated in real time, you can use volatile values in list view filters. You can set up a list view web part that only shows items where Created is equal to [Today]. Since you can place multiple list view web parts on one page, you could have one section for today's items, and another web part for all the other items, giving you a visual separation.
A workflow, timer job, or scheduled task: You can use a repeating process to set the value of a normal (non-calculated) column on a daily basis. You need to be careful with this approach to ensure good performance; you wouldn't want it to query for and update every item in the list if the list has surpassed the list view threshold, for example.
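If you go the scheduled task route, this is roughly what a small console application run daily by Windows Task Scheduler could look like with the server-side object model. The site URL, list name, and the "DaysElapsed" column are made-up placeholders, and on a large list you would want to page through items with a filtered CAML query rather than touch every item as this sketch does.

    using System;
    using Microsoft.SharePoint;

    class UpdateElapsedDays
    {
        static void Main()
        {
            using (var site = new SPSite("http://server/sites/mysite"))
            using (var web = site.OpenWeb())
            {
                SPList list = web.Lists["My List"];
                foreach (SPListItem item in list.Items)
                {
                    // Recompute the elapsed days into a normal (non-calculated) Number column.
                    var created = (DateTime)item["Created"];
                    item["DaysElapsed"] = Math.Round((DateTime.Today - created).TotalDays);

                    // SystemUpdate(false) avoids bumping Modified or creating new versions.
                    item.SystemUpdate(false);
                }
            }
        }
    }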
I found some conversations about this issue. Many people suggest creating a new Date and Time column named Today, with visibility set to false and a default value of Today's Date. Then we can use this column in our formulas.
I tried this suggestion, and yes, the error is gone and the formula is accepted, but the calculated columns' values are wrong. I set the Today column to visible and checked it: it was empty, so the default value of Today's Date was not working. While looking for a solution to this, I carelessly deleted the Today column. Then I realized the calculated columns' values were right.
Finally: I don't know what the trick is, but if you create a column named Today before using the Today keyword in your formulas, and then delete the Today column after saving your formula, it works.
UPDATE
After Thriggle's answer I realized this approach doesn't work like a charm. Yes, the formula doesn't cause an error when the calculated column is saved, but it only works correctly the first time; the next day the calculated column shows old values, because its values are static, as Thriggle explained.
I’m trying to create a dashboard filter in Tableau. All but one of my graphs have the same primary data source A. The filter will affect all these graphs as intended. However I have one sheet where the primary data source is B, and the secondary data source is A. I can’t get this particular graph to link to the quick filter I’ve created. Does anyone know of a workaround for this?
The easiest way to filter multiple data sources from a single user control is to use a parameter along with calculated fields in each data source that reference the parameter setting. The calculated fields can then be put on the filter shelf for the appropriate worksheets.
This solution doesn't fit every circumstance.
Parameters can only hold a single value, and the list of allowed values must either be defined statically in the workbook or the parameter must allow the user to enter an arbitrary value. You can't dynamically look up the list of legal parameter values from a database table (although you can use a field to populate the list initially).
Parameters are independent of any data source.
So if these restrictions don't hamper your use case, you can have one parameter control on a dashboard that influences the filters applied to many worksheets. The simplest calculated field used for filtering could just say [My_Field] = [My_Parameter]. You can extend this idea to define parameter values that represent multiple choices, like "A", "B", "A and B", and then adjust your calculated fields accordingly. At some point, though, this approach gets unwieldy.
Another approach is to use a worksheet as a filter, by displaying marks for each option and then using filter actions so that the selected marks filter other worksheets. This approach allows multiple selection and can load the choices dynamically from a database table.
I'm struggling to decide what database schema to use: one large table, or many small ones (which would be more difficult to manage).
I have 10 templates, each with its own text fields. I am trying to store the text for the templates in a database and then, when the web page is called, show the correct text in the HTML template. Because a mixture of these templates will appear in a sequence of screens that can be navigated backwards and forwards, I need to be able to sequence them; the only way I can think of is to add a page_number column. I would also like to re-order and delete them as necessary using the page_number column.
I was planning to do all this in a web application without the need for a standard folder/web page structure, like a small CMS system.
Option 1
I can create one large table with many columns, lots of which will be empty (more than half, in each row). Is this bad?
Option 2
I could create many tables, each using only the columns its template requires.
The problem I see with this is the headache of repopulating a column in each table when I delete a row, because I need to re-sequence the column that represents page numbers. That headache is reduced if I use one large table.
I've thought of moving page numbers into another table called page_order, but I cannot think of a way to maintain an effective relationship between it and the other tables if I make changes.
I'm yet to figure out how to re-sequence a column in a database when a row is deleted. Surely this is a common problem!?
Thanks for taking the time to help!
Have one table that contains one row per template. It might look like:
id (INT, auto-increment)
page_order (INT, unique key here, so pages cannot have the same number)
field1 (STRING, name of the text field)
value1 (STRING, contents of the text field)
field2
value2
Then you have to decide the maximum number of fields that any page can have (N) and keep adding field/value columns up to fieldN/valueN.
The advantage of this is you have one table that isn't sparsely populated (as long as the templates have about the same number of fields, even if the names of those fields are different).
If you want to make an improvement to this (maybe not necessary for a small amount of data), you could change each field column to an INT id and connect it to a lookup table that contains (field_id, field_name).
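On the re-sequencing worry from the question: with this single-table design, deleting a page only needs one follow-up UPDATE to close the gap in page_order. A sketch with ADO.NET, assuming the table is called templates and using a placeholder connection string:

    using System.Data.SqlClient;

    public static class PageRepository
    {
        // Deletes the page at the given position and shifts later pages down by one.
        public static void DeletePage(string connectionString, int pageOrder)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var transaction = connection.BeginTransaction())
                {
                    using (var delete = new SqlCommand(
                        "DELETE FROM templates WHERE page_order = @page", connection, transaction))
                    {
                        delete.Parameters.AddWithValue("@page", pageOrder);
                        delete.ExecuteNonQuery();
                    }

                    using (var shift = new SqlCommand(
                        "UPDATE templates SET page_order = page_order - 1 WHERE page_order > @page",
                        connection, transaction))
                    {
                        shift.Parameters.AddWithValue("@page", pageOrder);
                        shift.ExecuteNonQuery();
                    }

                    transaction.Commit();
                }
            }
        }
    }

Because the shift happens in a single UPDATE statement, the unique key on page_order is still satisfied once the statement completes.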