I'm not quite sure how to proceed. I have an ASP.NET MVC 5 web application that has identifiers in the URL as in the following:
/Controller/Action/XX0000001X/123456
where XX0000001X and 123456 are identifiers for the record. Now, what I need is for the web performance test not to hard-code these values in the test itself, because they are generated anew every time and the next run would fail.
Is there a way to allow any value in the URL within Visual Studio web performance test?
Some ways of getting the values are:
You just know the values, so embed them in the URL or save them in context parameters, e.g. setting CP1 to XX0000001X and CP2 to 123456.
The values are provided in earlier responses. Use an extraction rule to get each value; you will probably need two rules, saving to CP1 and CP2. There are several built-in rules, but writing your own extraction rule is possible for more complex cases.
Several sets of values are known in advance of the test and each set should be used in one test. Data drive with these values. See How to use a list of values for a parameter? for more details.
Some other code, perhaps in a plugin, generates the values and saves them to context parameters CP1 and CP2, as in the sketch below.
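For that last option, a minimal sketch of a web test plugin (the ID values here are placeholders; substitute however your application generates its identifiers):

    using Microsoft.VisualStudio.TestTools.WebTesting;

    // Hypothetical plugin that populates CP1 and CP2 before the test runs.
    public class SetRecordIdsPlugin : WebTestPlugin
    {
        public override void PreWebTest(object sender, PreWebTestEventArgs e)
        {
            // Placeholder values; generate or look up the real identifiers here.
            e.WebTest.Context["CP1"] = "XX0000001X";
            e.WebTest.Context["CP2"] = "123456";
        }
    }

Attach it to the test via "Add Web Test Plug-in" and the context parameters will be set on each run.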
Having got the required values into context parameters, just use them in the URL. Set the URL (or part of it) to be
Controller/Action/{{CP1}}/{{CP2}}
Or, if the values come from a data source, then the URL should be in the style
Controller/Action/{{DataSource1.FileName#csv.FirstField}}/{{DataSource1.FileName#csv.SecondField}}
If this question is deemed inappropriate because it does not have a specific code question and is more "am I barking up the right tree," please advise me on a better venue.
If not: I'm a full stack .NET web developer with no SSRS experience, and my only knowledge comes from the last 3 sleepless nights. The app my team is working on requires end users to be able to create as many custom dashboards as they would like by creating instances of a dozen or so predefined widget types. Some widgets are as simple as a chart or table, and the user configures the widget to display a subset of possible fields selected from a larger set. We have a few widgets that are composites. The web client is all Angular and consumes a RESTful Web API.
There are two more requirements: a reasonable facsimile of each widget must be downloadable as a PDF report on request, or at scheduled times. There are several ways to satisfy this requirement, so I am not looking for alternate solutions. If SSRS would work, it would save us from having to build a scheduler and either find a way to leverage the existing Angular templates, or create views based off of them, populate them and convert that to a PDF. What I am looking for is help in understanding report generation best practices and how they interact with .NET assemblies.
My specific task is to investigate whether SSRS can create a report based on a composite widget and either download it as a PDF or schedule it as one, and if so, create a POC based on a composite widget that contains 2 line graphs and a table. The PDF versions do not need to be displayed the same way as the UI, where the graphs are on the same row and the table is below. I can show each graph on its own as long as the display order is in reading order (left to right, then down to the next line).
An example case could be that the first graph shows the sales of Xboxes over the course of last year. The line graph next to it shows the number of new releases for the Xbox over the course of last year. The table below shows the number of Xbox accessories sold last year, grouped by accessory type (controller, headset, etc.) and by month, ordered by the total sales amount per month.
The example above would take 3 queries. The queries are unique to that user's specific instance of that widget on that specific dashboard. The user can group, choose sort columns, and anything else that is applicable.
How these queries are created is not my task (at least not yet), so there is an assumption that a magic query engine creates and stores these SQL queries correctly in the database.
My target database is SQL Server 2012 and its Reporting Services. I'm disappointed it only supports the 2.0 CLR.
I have a rough outline of a plan, but given my lack of experience any help with this would be appreciated.
It appears I can use the SOAP service for scheduling and management. That's straightforward.
The rest of my plan sounds pretty crazy. Any corrections, guidance and better suggestions would be welcome, or maybe a different methodology. The report server is a big security hole, and if I can accomplish the requirements by only referencing the Reporting Services namespaces, please point me in the right direction. If not, this is the process I have cobbled together after 3 days of research and a few simple MSDN tutorials. Here goes:
To successfully create the report definition, I will need to reference every possible field in the entire superset available. It isn't clear yet if the superset for a table is the same as the superset for a graph, but for this POC I will assume they are. This way, I will only need a single stored procedure with an input parameter that identifies the correct query, which I will select and execute. The result set will be a small subset of the possible fields, but the stored procedure will return every field, with nulls in each row for the omitted fields, so that the report knows about every field. Terrible. I will probably be returning 5 columns with data and 500 full of nulls. There has to be a better way. Thinking about the performance hit is making me queasy, but that part was pretty easy, and now I have a deployable report. I have no idea how I would handle summaries. Would they be additional queries that I would just append to the result set? Maybe the magic query engine knows.
Now for some additional ugliness. I have to request the report URL with a query string that identifies the correct query. I am guessing I can also set the scheduler up with the correct parameter. But man, do I have issues. I could call the URL using HttpWebRequest for my download, but how exactly does the scheduler work? I would imagine it would create the report in a similar fashion, and I should be able to tell it in what format to render. But for the download I would be streaming HTML. How would I tell the report server to convert it to a PDF and then stream it as such? Can that be set in the report's definition before deploying it? It has no problem with the conversion when I play around on the report server. But at least I've found a way to secure the report server by accessing it through the Web API.
Then there is the issue of cleaning up the null columns. There are extension points, such as data processing extensions. I think these are almost analogous to a step in the web page life cycle, but I'm not sure exactly, or else they would be called events. I would need to find the right one so that I can remove the null data columns, or labels on a pie chart at null percent, if that doesn't break the report. And I need to do it while it is still RDL. And just maybe, if I still haven't found a way, transform the RDL to a PDF and change the content type. It appears I can add .NET assemblies at the extension points. But is any of this correct? I am thinking like a developer, not like a seasoned SSRS pro. I'm trying, but any help pushing me in the right direction would be greatly appreciated.
I had tried revising that question a dozen times before asking, and it still seems unintelligible. Maybe my own answer will make my own question clear, and hopefully save someone else having to go through what I did, or at least be a quick dive into SSRS from a developer standpoint.
Creating a typical SSRS report involves (quick 40,000-foot overview):
1. Creating your data connection
2. Creating a SQL query or queries, which can be parameterized
3. Creating datasets that the query results will fill
4. Mapping dataset columns to report items: charts, tables, etc.
Then you build the report and deploy it to your report server, where the report can be requested by URL, with any SQL parameter values added as a query string:
http://reportserver/reportfolder/myreport?param1=data
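As an aside, URL access can also render straight to a specific format via rs: commands on the query string. A rough sketch of fetching the PDF from code (the server, folder, report and parameter names are placeholders, and the exact path depends on your ReportServer endpoint):

    using System.Net;

    // Download the report rendered as PDF via SSRS URL access.
    // rs:Format selects the renderer.
    using (var client = new WebClient { UseDefaultCredentials = true })
    {
        string url = "http://reportserver/ReportServer?/reportfolder/myreport"
                   + "&param1=data&rs:Command=Render&rs:Format=PDF";
        client.DownloadFile(url, @"C:\temp\myreport.pdf");
    }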
How this works is that an RDL (Report Definition Language) file, which is just an XML document with a specific schema, is generated. The RDL has two elements that were relevant to me, DataSets and ReportItems. As the names imply, the first contains the queries and the latter contains the graphs, charts, tables, etc. in the report and the mappings to the columns in the dataset.
When the report is requested, it goes through a processing pipeline on the report server. By implementing interfaces in the Reporting Services namespaces, one can create .NET assemblies that transform the RDL at various stages in the pipeline.
Reporting Services also has two APIs: one for managing reports, and another for rendering. There is also the ReportViewer control, a .NET WebForms control which is pretty rich in functionality and could be used to create and render reports without even needing a report server instance. The report files the control generates in that mode are RDLC files, with the C standing for client.
Armed with all of this knowledge, I found several solution paths, but none of them was optimal for my purposes, and I have moved on to a solution that does not involve Reporting Services or RDL at all. But these may be of use to someone else.
I could transform the RDL file as it went through the pipeline. Not very performant, as this involved writing to the actual physical file, and then removing the modifications after rendering. I was also using SQL Server 2012, which only supported the 2.0/3.5 framework.
Then there were the services. Using either service, I could retrieve an RDL template as a byte array from my application, and I wasn't limited by the CLR version here. With the management service, I could modify the RDL and deploy it to the report server. I would only need to modify the RDL once, but given the number of files I would need and having to manage them on the remote server, creating file structures by client/user/Dashboard/ReportWidget looked pretty ugly.
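For reference, the deploy step looks roughly like this sketch, against a proxy generated from ReportService2010.asmx (names, paths and the BuildModifiedRdl helper are all hypothetical):

    // Sketch: publish a modified RDL via the management SOAP service.
    var mgmt = new ReportingService2010
    {
        Url = "http://reportserver/ReportServer/ReportService2010.asmx",
        Credentials = System.Net.CredentialCache.DefaultCredentials
    };

    byte[] modifiedRdl = BuildModifiedRdl();  // hypothetical helper

    Warning[] warnings;
    mgmt.CreateCatalogItem(
        "Report",                  // item type
        "MyWidgetReport",          // report name (placeholder)
        "/Client/User/Dashboard",  // target folder (placeholder)
        true,                      // overwrite an existing item
        modifiedRdl,
        null,                      // no extra properties
        out warnings);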
Alternatively, instead of deploying the RDL templates, why not just store them in the database as byte arrays? When I needed a specific instance, I could fetch the RDL template, add my queries and mappings to it, and then pass it to the execution service, which would render it. I could then save the resulting RDL in the database, where it would be much easier for me to manage. But now the report server would be useless: I would need my own services for management, and to create subscriptions and mail them I would need a queue service and an SMTP mailer. That would remove all the extras I get from the report server, require a ton of custom code, and still leave me bound by RDL. So I would be creating RDLM: RDL Mess.
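That render-from-the-database path would look something like this sketch, using a proxy generated from ReportExecution2005.asmx (LoadRdlFromDatabase and widgetInstanceId are hypothetical):

    // Sketch: render a stored RDL definition to PDF via the execution service.
    var exec = new ReportExecutionService
    {
        Url = "http://reportserver/ReportServer/ReportExecution2005.asmx",
        Credentials = System.Net.CredentialCache.DefaultCredentials
    };

    byte[] rdl = LoadRdlFromDatabase(widgetInstanceId);  // hypothetical helper

    Warning[] warnings;
    exec.LoadReportDefinition(rdl, out warnings);

    string extension, mimeType, encoding;
    string[] streamIds;
    byte[] pdf = exec.Render("PDF", null,
        out extension, out mimeType, out encoding, out warnings, out streamIds);
    // pdf now holds the rendered bytes, ready to stream or persist.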
It was the wrong tool for the job, but it was an interesting exercise: I learned more about Reporting Services from every angle, and was paid for most of that time. Maybe a blog post would be a better venue, but then I would need to go into much greater detail.
I have run a quick test of changing the AllXml column of dbo.ELMAH_Error to XML type. Logging inserts and the standard log viewer still function.
This has the benefit of allowing me to create indexes on the AllXml column. However, I am concerned that this may cause an incompatibility with other software that may work in conjunction with ELMAH.
What are the ramifications of doing this?
I am planning to build part of an internal dashboard around SSRS reports. I'm using ASP.NET (Framework 4), SQL Server 2008 R2 and IIS 6, and have built all my reports already in Report Builder 3.0.
Now it comes to pulling the reports through from the report server to the dashboard using the ReportViewer control. What I would like to do is to hide the SSRS report parameters and provide them myself from code behind, because the SSRS parameter selection controls are ugly (Windows 95 ugly) and don't fit with the look and feel of the site. Also, I have different end-user reports that are in fact the same report with different parameter selections (and I don't want to show these selection controls).
The issue is that all of my reports are VERY heavily parameterised - the business users want reports that are as flexible as pivots (you can imagine the fun I had building these reports). As a result I use a large number of shared datasets to provide the default and available values for each parameter. Since I intend to present these parameters myself in my web application, I need to know:
What the parameters are for each report (name and type)
What the default values should be for each parameter
What the available options should be for each parameter
I am happy to store the names/types of each parameter in a database table, but there would be far too many values to do the same for the default and available values (plus the data is too dynamic). Can anyone think of a solution?
In fact, it is quite easy to retrieve the Default and Available parameter values used in the report:
Once you have set the report source, ServerReport.GetParameters() returns a collection of ReportParameterInfo which, for each parameter, provides the data type, default/available values ("Values"/"Valid Values" respectively, where "Valid Values" are value/label pairs) and other useful attributes like a list of other parameters that depend on it.
So, just save default/available parameters in the report (using Report Builder / Report Manager) and use this to retrieve them in your code behind.
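A rough sketch of reading that metadata from code-behind, where reportViewer is the ReportViewer control on the page and the server URL and report path are placeholders:

    using System;
    using System.Collections.Generic;
    using Microsoft.Reporting.WebForms;

    // Sketch: pull parameter metadata from the server after setting the source.
    reportViewer.ProcessingMode = ProcessingMode.Remote;
    reportViewer.ServerReport.ReportServerUrl = new Uri("http://reportserver/ReportServer");
    reportViewer.ServerReport.ReportPath = "/reportfolder/myreport";

    foreach (ReportParameterInfo p in reportViewer.ServerReport.GetParameters())
    {
        string name = p.Name;                  // parameter name
        ParameterDataType type = p.DataType;   // Boolean, DateTime, Integer, Float, String
        IList<string> defaults = p.Values;     // default values
        if (p.ValidValues != null)
        {
            foreach (ValidValue v in p.ValidValues)  // available value/label pairs
            {
                string label = v.Label, value = v.Value;
            }
        }
        // p.Dependents lists the parameters that depend on this one.
    }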
Is this what you're looking for: http://msdn.microsoft.com/en-us/library/ms155391.aspx ? I'm not quite clear what your issue is from your post; what exactly do you need help with in doing what you're trying to do?
I have a stored procedure that produces a number, let's say 50, that is rendered as an anchor with the number as the text. When the user clicks the number, a popup opens, calls a different stored procedure and shows 50 rows in an HTML table. The 50 rows are the disaggregation of the number the user clicked. In summary: two different aspx pages and two different stored procedures that need to show the same amount, where one amount is the aggregate and the other is its disaggregation.
Question: how do I test this code so that I know there is an error somewhere if the numbers do not match?
Note: this is a simplified example; in reality there are hundreds of anchor tags on the page.
This kind of testing falls outside the standard code-level testing paradigm. Here you are explicitly validating the data, and it sounds like you need a utility to achieve this.
There are plenty of environments and approaches you could take, but here are two possible candidates:
SQL Management Studio: here you can write a simple script that runs through the various combinations from the two stored procedures, ensuring that the number and the rows match up. This will involve some inventive T-SQL, but nothing particularly taxing. The main advantage of this approach is you'll have bare-metal access to the data.
Unit testing: as mentioned, your problem is somewhat outside the typical testing scenario, where you would ordinarily mock the data and test your business logic. However, that doesn't mean you cannot write the tests (especially if you are doing any DataSet manipulation prior to this processing). Check out this link and this one for various approaches (note: if you're using VS2008 or above, you get the testing projects built in from the Professional version up).
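As a concrete starting point, a sketch of such a test; the stored procedure names, parameter and connection string are all placeholders for your own:

    using System.Data;
    using System.Data.SqlClient;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class AggregateConsistencyTests
    {
        // Hypothetical objects: usp_GetSummaryCount returns the single number,
        // usp_GetDetailRows returns the rows that number is supposed to summarize.
        const string Cnn = @"Server=.;Database=MyDb;Integrated Security=true";

        [TestMethod]
        public void SummaryCount_Matches_DetailRowCount()
        {
            using (var conn = new SqlConnection(Cnn))
            {
                conn.Open();

                int summary;
                using (var cmd = new SqlCommand("usp_GetSummaryCount", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@RegionId", 1);  // example parameter
                    summary = (int)cmd.ExecuteScalar();
                }

                int detailRows = 0;
                using (var cmd = new SqlCommand("usp_GetDetailRows", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@RegionId", 1);
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read()) detailRows++;
                }

                Assert.AreEqual(summary, detailRows,
                    "Aggregate and its disaggregation do not match.");
            }
        }
    }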
In order to test what happens when the numbers do not match, I would simply change (temporarily) one of the stored procedures to return the correct amount +1, or to always return zero, etc.
I am building a web application where I am using multilingual support. I am using variables for label text display, so that administrators can change a value in one place and that change is reflected throughout the application.
Please suggest which is better/less time-consuming for displaying label text:
1. Relational DB interaction
2. Constant variables
3. XML interaction
How could I find/calculate the processing time of the above three?
'Less time consuming' is easy, and completely intuitive: constants will always be faster than retrieving the information from any external source, even another in-memory source; probably even faster than retrieving it from a variable (which is where any of the other solutions would have to end up putting the data).
But I suspect there is more to it than that. If you need to support the ability to change that data (and even if not), you may also consider using resource files, which would enable you to replace all such resources based on language/culture.
But you could test the speed fairly easily using the .NET 4 Stopwatch class, or the system tick count (Environment.TickCount) if you don't have 4.0.
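A rough harness for that comparison; GetLabelFromDb and GetLabelFromXml are hypothetical stand-ins for your own lookups, stubbed here so the sketch compiles:

    using System;
    using System.Diagnostics;

    class LabelTiming
    {
        const string LabelConstant = "Welcome";

        // Hypothetical stand-ins: replace with your real DB and XML lookups.
        static string GetLabelFromDb(string key) { return key; }
        static string GetLabelFromXml(string key) { return key; }

        static void Main()
        {
            const int iterations = 100000;
            int sink = 0;  // keeps the loops from being optimized away

            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++) sink += LabelConstant.Length;
            sw.Stop();
            Console.WriteLine("Constant: {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            for (int i = 0; i < iterations; i++) sink += GetLabelFromDb("Welcome").Length;
            sw.Stop();
            Console.WriteLine("Database: {0} ms", sw.ElapsedMilliseconds);

            sw.Restart();
            for (int i = 0; i < iterations; i++) sink += GetLabelFromXml("Welcome").Length;
            sw.Stop();
            Console.WriteLine("XML: {0} ms ({1})", sw.ElapsedMilliseconds, sink);
        }
    }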
1. DB interaction: the rate of database round trips would increase, unless you apply some caching logic.
2. Constants: manageability issues.
3. XML: parsing time plus a high rate of IO, etc.
Create three unit tests, one for each choice.
Load test them and compare the results.