Report - Reload Data when using multiple Schemas - icCube

I've created a report mixing multiple schemas/cubes.
One schema contains near-realtime data that should be reloaded frequently, while the others change on a daily basis.
If I set a refresh period in the general layout settings, ALL the schemas are reloaded; this forces a redraw of the MDX filters and graphs that have no interest in the "RealTime" schema that changed.
I'm trying to force a reload of the MDX query of specific graphs using JavaScript, firing an event on "Do Refresh Query", but I never see any GVI request for reloading data.
How can I programmatically tell the report to reload ONLY a single schema, or simply not cache existing data and always request fresh data from the server?

The reporting does not force the server to reload a schema.
The "Refresh Period" (in the report) is defining a polling interval; i.e., every "Refresh Period" the report is asking the server about new data. In case new data is available then the report is updating each widget sending new requests.
In case of several schemas are using in the report, the polling request is based on the "default" schema.
As of icCube 6.8x, in case new data is available all widgets are refreshed. Preventing a widget to be refreshed is not supported (contact icCube support).
Hope that helps.

Related

Report Server Reports Hanging

I'm working on an issue with heavily fragmented indexes on a large production DB. I've pretty much identified the indexes that are heavily fragmented, including those that are not really being used. I plan to rebuild some and remove others, so my next step is to devise a before-and-after timing test.
One of the symptoms of this is SSRS reports taking about an hour to render. I'm new to Reporting Services. I can see that a report is embedded in the ASPX page using a ReportViewer control with the ServerReport's ReportPath and ReportServerUrl properties set. My problem is trying to figure out how to time the display of the report from start to finish in the code-behind. I can write the start time to a file in Page_Load, but I can't figure out how to record the end time... PreRender could just hang, and I'm not sure whether it is the only page lifecycle event I can tap into to record this. Should I use a Windows Service instead, and if so, how would I trigger/record the start and end times that way?
I'd really appreciate some feedback on whether this is possible via the display page's code-behind.
Have you tried looking in the Reporting Services execution logs? They contain several timed events, such as data retrieval time, render time and processing time, plus the actual start and end times. Check ReportServer.dbo.ExecutionLog and ReportServer.dbo.Catalog.
To check the log settings, connect to your SSRS server using SQL Server Management Studio (not the database engine; select Reporting Services in the connection dialogue). Once connected, right-click the server and choose Properties. On the Logging tab you will see the number of days of history to retain; by default this is 60 days.
Assuming that is not zero, you can then run a simple query like this to get the report execution details:
SELECT c.Name, e.TimeStart, e.TimeEnd,        -- actual start/end times
       e.TimeDataRetrieval, e.TimeProcessing, -- data retrieval and processing (ms)
       e.TimeRendering, e.Status              -- render time (ms) and outcome
FROM ReportServer..ExecutionLog e
JOIN ReportServer..Catalog c
ON e.ReportID = c.ItemID
WHERE c.Name = 'myReportName'

Background task in ASP.NET

I am writing a web application using ASP.NET (not MVC), with .NET v4 (not v4.5).
I fetch some of the data which I must display from a 3rd-party web service, one of whose methods takes a long time (several seconds) to complete. The information to be fetched/prefetched varies depending on the users' initial requests (because different users ask for details about different objects).
In a single-user desktop application, I might:
Display my UI as quickly as possible
Have a non-UI background task to fetch the information in advance
Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it
To do something similar using ASP.NET, I guess I could:
Use a BackgroundWorker, passing the Session instance as a parameter to the worker
On completion of the worker's task, write fetched data to the Session
If the user's request for data arrives before the task is complete, then block until it has completed
Do you foresee any problems? Can you suggest improvements?
[There are other questions on StackOverflow about ASP.NET and background tasks, but these all seem to be about fetching and updating global application data, not session-specific data.]
Why not use the same discipline as in a desktop application:
Load the page without the data from the service ( = Display my UI as quickly as possible)
Fetch the service data using an ajax call (= Have a non-UI background task to fetch the information in advance)
This is actually the same, although you can show an animated GIF to indicate that work is still in progress... (Therefore hope to have an already-fetched/cached version of the data by the time the user drills down into the UI to request it)
In order to post some example code, it would be helpful to know: are you using jQuery? Plain JavaScript? Something else? No JavaScript at all?
Edit
I am not sure if this was your plan, but another idea is to fetch the data on the server side as well, and cache it for future requests.
In this case the stages will be (see the sketch after this list):
1. Get a request.
2. Is the service data cached?
2.a. Yes? Post the page with the full data.
2.b. No? Post the page without the service data.
2.b.i. On the server side, fetch the service data and cache it for future requests.
2.b.ii. On the client side, fetch the service data and cache it for the current session.
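To make the server-side stages concrete, here is a minimal C# sketch of steps 2.a/2.b using the ASP.NET cache; FetchFromService, the key scheme and the 10-minute expiry are illustrative assumptions, not part of the original answer.

// Rough server-side sketch of steps 2.a/2.b: check the cache, otherwise
// fetch from the slow service and cache the result for future requests.
using System;
using System.Web;
using System.Web.Caching;

public static class ServiceDataCache
{
    public static string GetServiceData(string objectId)
    {
        string key = "svc:" + objectId;                  // assumed key scheme
        string data = HttpRuntime.Cache[key] as string;  // stage 2: cached?
        if (data == null)                                // stage 2.b: no
        {
            data = FetchFromService(objectId);           // the slow 3rd-party call
            HttpRuntime.Cache.Insert(key, data, null,
                DateTime.UtcNow.AddMinutes(10),          // assumed absolute expiry
                Cache.NoSlidingExpiration);
        }
        return data;                                     // stage 2.a: yes
    }

    private static string FetchFromService(string objectId)
    {
        return "...";   // placeholder for the several-seconds web service call
    }
}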
Edit 2:
Bear in mind that the downside of this approach is that if the method by which you fetch the data changes, you will have to remember to modify it on both the server and the client side.

Why is viewstate serialized to a hidden field in the form and not kept on the server?

I am quite new to WebForms and I'm trying to understand the ViewState. As far as I know today, it keeps modifications of the UI across postbacks to the same page. But why does it send the state (= the stored modifications) to the client rather than keep it on the server, saving CPU cycles and bandwidth?
Am I understanding something completely wrong?
The view state is something intrinsically connected to the view, as the name implies, and trying to manage it separately while maintaining that relation is not something that is easily accomplished.
You would need to store view state per page, so you would still have to send the client an ID in order to be able to get the correct view state on a postback. Another serious issue is that you send a page to the client but you don't know when, or if, the client is going to post that page back to the server, so you would need to store the view state at least until the session expires.
This could lead to a waste of server resources, as all those view states would be stored for users that may never post back to the server. If you keep your view state slim, you'll agree that the best place to store it is with the view itself.
Finally, if you're still not happy with the view state on the client, you can override the SavePageStateToPersistenceMedium and LoadPageStateFromPersistenceMedium methods of the page and save it to another medium. I've heard many people complain about view state on the client, and most of the time I just tell them to go ahead and implement persistence to another medium on the server... however, I believe no one ever did, probably because it's complicated and you end up with a solution that's not that clean.
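As a minimal sketch of that override, assuming you simply park the state in Session (the key scheme is an assumption, and it inherits the multiple-windows problem discussed below):

// Keep the page state in Session instead of the hidden field.
// Keying by URL is an assumption; it breaks down when the same page
// is open in several windows of one session.
protected override void SavePageStateToPersistenceMedium(object state)
{
    Session["__PAGESTATE_" + Request.RawUrl] = state;
}

protected override object LoadPageStateFromPersistenceMedium()
{
    return Session["__PAGESTATE_" + Request.RawUrl];
}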
ViewState is used when a page performs a postback in order to restore the control tree of the page to what it was when the page was last rendered.
This allows, for instance, a GridView control to keep its state (what is shown in the GridView) on postback without having to rebind it to the same data.
The reason the ViewState by default is serialized and sent to the client is (I guess) that it's the easiest way to get it back when the client performs a postback.
What if, for instance, a user has several browser windows open with the same page loaded and you have the viewstate stored in the Session? Assigning the correct viewstate to the different windows in such a case can of course be solved, but having the client explicitly post it seems to be the easiest way.
That said, it is possible to have the viewstate stored in the Session; see for instance the SessionPageStatePersister class.
Other possibilities are available by implementing your own System.Web.UI.PageStatePersister.
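For instance, a minimal sketch using the built-in SessionPageStatePersister from System.Web.UI (the rest of the page stays unchanged):

// Use the built-in Session-based persister; only a small token travels
// to the client instead of the full serialized state.
protected override PageStatePersister PageStatePersister
{
    get { return new SessionPageStatePersister(this); }
}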

ASP.NET feedback during long submit

This is probably a really simple thing. Basically, the user clicks a button and a potentially long-running task happens. I'd like to do a few things, like toggle the button's enabled state, show a spinner, etc. In VB.NET WinForms I'd just call Application.DoEvents(), the updates would happen, and the code could continue. How can I do this in ASP.NET? (Preferably server-side, or with minimal JavaScript.)
There are a few ways to approach this in ASP.Net depending on exactly what your requirement is. Getting the process started is fairly easy, but monitoring it for completion from the client side will require more work in both the server and the client.
The basic outline of the solution is:
1) Perform some action on the client that initiates the action. For example, you could post the entire page back on a button click, initiate an ajax request, or have a partial page postback depending on how much information you need from the page.
2) On the server side, initiate the task using a BackgroundWorker, make an entry in a workflow queue, or store a request in a database table that is monitored by a service that is responsible for performing the action.
3) Back on the client side, use JavaScript to start a window.setTimeout loop that, when it fires, issues an ajax request to the web server to check for completion. Using a timeout loop like this will ensure that the UI remains responsive and that any animations being displayed will display correctly. How you check for completion will depend on how your server-side implementation is designed, but it will almost certainly require a database.
We use the following general approach for initiating reports from the web client, which can be long running:
When the user initiates the report, open a new window to the report generation page on the client using javascript, passing the page enough parameters to get it started. Opening a separate window allows the user to continue working, but still see that there is something happening.
The user interface for the report page basically contains an animated GIF so that the user knows that something is going on.
When the report page is initially loaded on the server, it generates a unique id for monitoring the status of the report and embeds this in javascript for use in monitoring the status. It then stores this unique identifier in a database table that contains the unique id and a status column, initializing the status to requested.
Once the database entry has been made, the page fires off a BackgroundWorker to initiate the action and then returns the page to the user.
When the page is displayed, JavaScript starts a window.setTimeout loop that periodically fires off an ajax request to the web server, which then checks the database for the status of the report using the unique identifier created earlier.
When the BackgroundWorker finishes the report, whether successfully or in failure, it updates the database table with the status and the location of the report or the error messages, and then terminates.
When the javascript loop finds that the report generation has completed, it either displays the error message or the report to the user.
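To make the status check concrete, here is a rough C# sketch of the kind of HTTP handler the ajax loop could call; the ReportStatus table, RequestId column and "ReportDb" connection-string name are assumptions for illustration, not part of the original answer.

// StatusHandler.ashx.cs -- hypothetical endpoint the polling loop calls.
using System.Configuration;
using System.Data.SqlClient;
using System.Web;

public class StatusHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];   // unique id from the page
        string status = "unknown";

        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["ReportDb"].ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT Status FROM ReportStatus WHERE RequestId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            conn.Open();
            object result = cmd.ExecuteScalar();
            if (result != null) status = (string)result;
        }

        context.Response.ContentType = "application/json";
        context.Response.Write("{\"status\":\"" + status + "\"}");
    }

    public bool IsReusable { get { return true; } }
}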
Hopefully, this gives you some ideas for solving your issue.
The issue with this could be that once the page has posted back, you can't update other sections of the page.
You can use multiple asp:UpdatePanel controls and have them communicate with each other, causing the state to change in the other panel.
Take a look at this link, which shows how to accomplish this:
http://www.ajaxtutorials.com/ajax-tutorials/tutorials-using-multiple-updatepanels-in-net-3-5-and-vb/
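As a small hedged sketch of that idea (the control names are assumptions, both panels need UpdateMode="Conditional", and the page needs a ScriptManager), the click handler can toggle the button and explicitly refresh the other panel:

// Code-behind sketch: StartButton sits in one conditional UpdatePanel,
// the spinner/progress message in another ("SpinnerPanel" is an assumed name).
protected void StartButton_Click(object sender, EventArgs e)
{
    StartButton.Enabled = false;   // toggle the button's enabled state
    SpinnerPanel.Update();         // explicitly refresh the other panel
    // kick off the long-running work here, e.g. via the polling pattern above
}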

How long should I cache an object which can be changed at any time?

I'm in the process of making a fairly complex application. It is expected to run across multiple web servers, otherwise this would be easy.
Basically, we have a set of Client records. Each Client record has an XML column which contains all of the "real" data, such as the client's name and other fields which are created dynamically. Our users can update a client's record at any time. We also have Application records; each application is tied to multiple clients, usually more than 3. Each client's XML data is usually greater than 5k of text.
In some profiling I've done, obtaining and deserializing this XML data is a fairly expensive operation. In one portion of our web application we must have very low latencies, so during this portion our web application is a JSON web service. When a request is made to it, usually every client record will be needed (in full, due to how it's currently coded). I'm attempting to make as few database hits as possible in this portion.
How long should I cache the Client records' XML objects? Knowing a user can change them at any time, I'm not sure if I should cache them at all; then again, can users live with slightly stale data?
Instead of refreshing the cache on any kind of schedule, just compare the last-modified date of any critical records against the cached value when they are accessed, which should be a very inexpensive operation, and then update the cache only when needed.
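A minimal sketch of that compare-on-access idea, assuming a Clients table with LastModified and XmlData columns and a "ClientsDb" connection string (all of these names are illustrative):

// Check a cheap LastModified scalar on every access; reload the expensive
// XML only when it is newer than what we have cached.
using System;
using System.Configuration;
using System.Data.SqlClient;
using System.Web;

public static class ClientXmlCache
{
    public static string GetClientXml(int clientId)
    {
        string key = "client-xml-" + clientId;
        var cached = HttpRuntime.Cache[key] as Tuple<DateTime, string>;

        DateTime lastModified = (DateTime)QueryScalar(
            "SELECT LastModified FROM Clients WHERE ClientId = @id", clientId);

        if (cached == null || cached.Item1 < lastModified)   // missing or stale
        {
            string xml = (string)QueryScalar(
                "SELECT XmlData FROM Clients WHERE ClientId = @id", clientId);
            cached = Tuple.Create(lastModified, xml);
            HttpRuntime.Cache[key] = cached;                 // refresh the cache
        }
        return cached.Item2;
    }

    private static object QueryScalar(string sql, int clientId)
    {
        using (var conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["ClientsDb"].ConnectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", clientId);
            conn.Open();
            return cmd.ExecuteScalar();
        }
    }
}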
You could store a hash of the xml in the database that the clients validate their cached XML against.
Then, if it doesn't match up, invalidate your cache and retrieve the new version.
When the XML is updated, update the hash with it; your clients will then notice the change and update their cache.
Maybe you should use a SqlCacheDependency to ensure the data is removed from the cache and reloaded from the database whenever it changes.
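A sketch of that suggestion; for SQL Server this requires notifications to be enabled beforehand (e.g. with aspnet_regsql.exe and a sqlCacheDependency section in web.config), and the "ClientsDb" / "Clients" names are assumptions:

// Cache the XML with a dependency on the Clients table; the entry is
// evicted automatically when the table changes.
using System.Web;
using System.Web.Caching;

public static class ClientXmlCacheWithDependency
{
    public static void CacheClientXml(int clientId, string xml)
    {
        var dependency = new SqlCacheDependency("ClientsDb", "Clients");
        HttpRuntime.Cache.Insert(
            "client-xml-" + clientId, xml, dependency,
            Cache.NoAbsoluteExpiration, Cache.NoSlidingExpiration);
    }
}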
