SOAP API method for SSRS report usage statistics - asp.net

I've developed a .NET web application that uses the SOAP API (ReportingService2010) to list details of SSRS reports.
For the next step, I need to gather usage statistics, such as which reports are accessed the most and how frequently each report is run.
I know you can get some of this from the ExecutionLog table, but I'd like to avoid the SQL approach. Is there a way to get usage statistics like this directly through the SOAP API?
Thanks.

Nope. The best you can get from the stock API is snapshot/cache history information. You could, however, extend the existing API, pulling the information from ExecutionLogStorage. Even though you'd still be building the methods yourself, at least you could wrap them up nicely within the existing web service.
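To illustrate, here's a minimal sketch of what one such hand-rolled method could look like, querying the documented ExecutionLog3 view over ExecutionLogStorage in the ReportServer catalog database. The connection string, DTO and SQL are placeholders, not part of ReportingService2010:

using System.Collections.Generic;
using System.Data.SqlClient;

public class ReportUsage
{
    public string Path { get; set; }
    public int Hits { get; set; }
}

public class UsageStatisticsService
{
    private readonly string connectionString; // points at the ReportServer database

    public UsageStatisticsService(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Returns the most-accessed reports, counted from the execution log.
    public IEnumerable<ReportUsage> GetMostAccessedReports(int top)
    {
        const string sql = @"
            SELECT TOP (@top) ItemPath, COUNT(*) AS Hits
            FROM dbo.ExecutionLog3
            GROUP BY ItemPath
            ORDER BY Hits DESC";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@top", top);
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    yield return new ReportUsage
                    {
                        Path = reader.GetString(0),
                        Hits = reader.GetInt32(1)
                    };
                }
            }
        }
    }
}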

Is it possible to convert my web API to an OData API?

Context:
I have an existing Web API of the following form:
http://site/api/{area}?slicer1=alpha&slicer1=beta&starttime=sometime&endtime=sometime
It's implemented in ASP.NET MVC. The API's function processes the parameters and feeds them into a SQL query. On success, it returns an IHttpActionResult with JSON data from SQL.
Note that the API lacks an entity model or entity relationship diagram. It's essentially just a wrapper around a SQL query.
Question:
Recently, I started learning about OData. It seems like OData is designed for the URL itself to control how data is filtered, as opposed to authoring a custom SQL query to filter.
Hence, I'd like an answer on whether my Web API could be converted to an OData API and, if so, what OData capabilities I'd need to do that (for instance, might OData v4 functions be useful)?
Clarifications:
I don't have any code written, nor am I asking for code as an answer.
I am looking to know what OData capabilities might enable my scenario (v4 functions, actions, etc...), or if OData and Web APIs are so different that my ask doesn't make sense.
Anticipating the "Why are you asking this?" question - I'm just interested in technical feasibility as a learning exercise.
You could switch to an OData API, but if you have no entities, i.e. no IQueryable to query, you'd still have to do the SQL command generation yourself. It's not too hard though - we did it in the project I'm working on.
You also have to ask yourself to what extent you want to switch to OData. For example, you could decide to just use its (type-safe) query-string parsing capabilities and generate your own SQL from the parsed filter tree (as mentioned above).
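As a sketch of that middle ground, here's roughly what parsing $filter and walking the resulting tree into a SQL predicate could look like. This assumes the Microsoft.OData.Core (v4) libraries; the EDM model, entity and property names are illustrative, and a real translator would parameterize constants rather than inline them:

using System;
using Microsoft.OData.Edm;
using Microsoft.OData.UriParser;

public static class FilterToSql
{
    // Recursively convert a parsed $filter tree into a SQL predicate string.
    public static string Translate(SingleValueNode node)
    {
        var binary = node as BinaryOperatorNode;
        if (binary != null)
            return "(" + Translate(binary.Left) + " " + SqlOperator(binary.OperatorKind) + " " + Translate(binary.Right) + ")";

        var convert = node as ConvertNode;
        if (convert != null)                 // implicit casts wrap operands
            return Translate(convert.Source);

        var property = node as SingleValuePropertyAccessNode;
        if (property != null)
            return property.Property.Name;   // map property to column name here

        var constant = node as ConstantNode;
        if (constant != null)                // inlined for brevity; parameterize in real code
            return constant.Value is string ? "'" + constant.Value + "'" : constant.LiteralText;

        throw new NotSupportedException(node.GetType().Name);
    }

    private static string SqlOperator(BinaryOperatorKind kind)
    {
        switch (kind)
        {
            case BinaryOperatorKind.Equal:       return "=";
            case BinaryOperatorKind.NotEqual:    return "<>";
            case BinaryOperatorKind.GreaterThan: return ">";
            case BinaryOperatorKind.LessThan:    return "<";
            case BinaryOperatorKind.And:         return "AND";
            case BinaryOperatorKind.Or:          return "OR";
            default: throw new NotSupportedException(kind.ToString());
        }
    }
}

Feeding it a parsed query might look like this:

var model = new EdmModel();
var row = new EdmEntityType("ns", "Row");
row.AddStructuralProperty("Slicer1", EdmPrimitiveTypeKind.String);
model.AddElement(row);
var container = new EdmEntityContainer("ns", "Default");
container.AddEntitySet("Rows", row);
model.AddElement(container);

var parser = new ODataUriParser(model, new Uri("http://site/"),
    new Uri("http://site/Rows?$filter=Slicer1 eq 'alpha'"));
string where = FilterToSql.Translate(parser.ParseFilter().Expression);
// where == "(Slicer1 = 'alpha')"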
On the other hand, having a fully fledged OData API would also dictate that the response format comply with the standard. This would mean that you can start connecting to your API using OData-aware tools (e.g. Excel, KendoUI Grid, etc.). I don't know if that would give you any benefit for your use-case.
Your question
what OData capabilities I'd need to do that
is not exactly clear to me. There are no OData capabilities that would help you migrate from a Web API to an OData API. OData is just a set of standardized query (CRUD) and response formats. You can also be OData-compliant by implementing it all on your own using out-of-the-box Web API facilities, but you'd probably want to use the Web API OData package.
The most important question for you is to ask yourself what advantage you'd have using OData.

Scheduling an Unsampled Report via the API

I would like to schedule a few unsampled reports to run monthly on the first of the month. For each report I need one unsampled report for the previous month, and another for the previous year. Using the GA web interface, I can schedule a monthly report for 6 months, but I don't see a way to schedule a report to include the previous year's worth of data. Some other limitations are that I have to remember to schedule the report every 6 months, and that I can't see what reports have already been scheduled. All of which leads me to the conclusion that I have to use the API if I want to accomplish this.
So first off, according to the documentation, I believe I should be able to do that via the "Unsampled Reports: insert" API method.
https://developers.google.com/analytics/devguides/config/mgmt/v3/mgmtReference/management/unsampledReports/insert
First off, is that a correct assumption? Does the insert trigger an unsampled report for immediate processing?
Secondly, can I configure a report in the API the same way as I configure it in the web interface? For example, for certain reports I set the type to Flat Table. Not sure how I would specify that in the API or is that irrelevant when it comes to a custom report?
Thirdly, does the output end up in Google Drive the same as if I ran the unsampled report via the web interface?
I'd strongly recommend reading the developer overview of the unsampled reporting methods of the Management API. It'll give you a good sense of how the process works.
To answer some of your specific questions:
First off, is that a correct assumption? Does the insert trigger an unsampled report for immediate processing?
The process isn't necessarily immediate, but yes, it does trigger a new unsampled report for processing.
Secondly, can I configure a report in the API the same way as I configure it in the web interface? For example, for certain reports I set the type to Flat Table.
No, you don't get those same report types; you just get data back. You can, however, configure it the same way you'd configure a Core Reporting API request. To play around with how that works, I'd check out the Query Explorer.
Thirdly, does the output end up in Google Drive the same as if I ran the unsampled report via the web interface?
Yes, the end result should be the same.
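For what it's worth, here's a sketch of the insert call using the Google APIs .NET client (Google.Apis.Analytics.v3). The account/property/profile IDs, dates, metrics and dimensions are placeholders, and 'credential' is assumed to be an authorized OAuth2 credential obtained elsewhere:

using Google.Apis.Analytics.v3;
using Google.Apis.Analytics.v3.Data;
using Google.Apis.Services;

var service = new AnalyticsService(new BaseClientService.Initializer
{
    HttpClientInitializer = credential,   // assumed: authorized OAuth2 credential
    ApplicationName = "unsampled-report-scheduler"
});

var report = new UnsampledReport
{
    Title = "Previous month (unsampled)",
    // Configured like a Core Reporting API query, not a web-UI report type:
    StartDate = "2014-01-01",
    EndDate = "2014-01-31",
    Metrics = "ga:sessions,ga:pageviews",
    Dimensions = "ga:source,ga:medium",
    Filters = "ga:medium==organic"
};

// Insert queues the report for processing (not necessarily immediate).
// Poll the report until its Status is COMPLETED; the download details on
// the finished report then point at the output file.
UnsampledReport queued = service.Management.UnsampledReports
    .Insert(report, "accountId", "webPropertyId", "profileId")
    .Execute();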

Architecture For A Real-Time Data Feed And Website

I have been given access to a real-time data feed which provides location information, and I would like to build a website around this, but I am a little unsure what architecture would best meet my needs.
Unfortunately the feed I have access to will only allow a single connection per IP address, so building a website that talks directly to the feed is out - each user would generate a new connection, which would be rejected. It would also be desirable to perform some pre-processing on the data, so I guess I will need some kind of back end which retrieves the data, processes it, then makes it available to a website.
From a front-end connection perspective, web services sound like they may work, but would this also create multiple connections to the feed, one per user? I would also like the back-end connection to be persistent, so that data is retrieved and processed even when the site is not being visited - but I believe IIS will recycle web services and websites when they are idle?
I would like to keep the design fairly flexible - in future I will be adding some mobile clients, so the API needs to support remote connections.
The simple solution would have been to log all the processed data to a database, which could then be picked up by the website, but this loses the real-time aspect of the data. Ideally I would be looking to push the data to the website every time the data changes or new data is received.
What is the best way of achieving this, and what technologies are out there that may assist here? Comet architecture sounds close to what I need, but that would require building a back end that can handle multiple web-based queries at once, which seems like quite a task.
Ideally I would be looking for a C# / ASP.NET based solution with Javascript client side, although I guess this question is more based on architecture and concepts than technological implementations of these.
Thanks in advance for all advice!
Realtime Data Consumer
The simplest solution would seem to be having one component that is dedicated to reading the realtime feed. It could then publish the received data onto a queue (or multiple queues) for consumption by other components within your architecture.
This component (A) would be a standalone process, maybe a service.
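A rough sketch of component A, assuming MSMQ as the queue; the queue path is a placeholder and the feed interfaces are hypothetical stand-ins for whatever API your provider exposes:

using System;
using System.Messaging;

// Hypothetical feed abstraction; your provider's API goes behind this.
public interface IFeedConnection : IDisposable { string ReadNext(); }
public interface IFeedClient { IFeedConnection Connect(); }

public static class FeedPump
{
    // Component A: holds the single allowed connection to the feed and
    // republishes each message onto a local MSMQ queue.
    public static void Run(IFeedClient feedClient)
    {
        using (var queue = new MessageQueue(@".\Private$\locationFeed"))
        using (var feed = feedClient.Connect())
        {
            string message;
            while ((message = feed.ReadNext()) != null)   // assumed blocking read
            {
                queue.Send(message);
            }
        }
    }
}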
Queue consumers
The queue(s) can be read by:
a component (B) dedicated to persisting data for future retrieval or querying. If the amount of data is large you could add more components that read from the persistence queue.
a component (C) that publishes the data directly to any connected subscribers. It could also do some processing, but if you are looking at doing large amounts of processing you may need multiple components that perform this task.
Realtime web technology components (D)
If you are using a .NET stack then it seems like SignalR is getting the most traction. You could also look at XSockets (there are more options in my realtime web tech guide; just search for '.NET').
You'll want to use SignalR to manage subscriptions and then publish messages to registered clients (PubSub - this SO post seems relevant, maybe you can ask for a bit more info).
You could also look at offloading the PubSub component to a hosted service such as Pusher, who I work for. This will handle managing subscriptions and component C would just need to publish data to an appropriate channel. There are other options all listed in the realtime web tech guide.
All these components come with a JavaScript library.
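To make component D concrete, here's a minimal ASP.NET SignalR (2.x) sketch; the hub, channel and client-callback names are illustrative:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

// Component D: browsers subscribe to named channels; component C pushes updates in.
public class LocationHub : Hub
{
    // Called from the JavaScript client to join a channel.
    public Task Subscribe(string channel)
    {
        return Groups.Add(Context.ConnectionId, channel);
    }
}

public static class LocationPublisher
{
    // Called by component C after reading a processed message off the queue.
    public static void Publish(string channel, object update)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<LocationHub>();
        context.Clients.Group(channel).locationUpdated(update);   // dynamic client callback
    }
}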
Summary
Components:
A - .NET service - that publishes info to queue(s)
Queues - MSMQ, NServiceBus etc.
B - Could also be a simple .NET service that reads a queue.
C - this really depends on D since some realtime web technologies will be able to directly integrate. But it could also just be a simple .NET service that reads a queue.
D - Realtime web technology that offers a simple way of routing information to subscribers (PubSub).
If you provide any more info I'll update my answer.
A good solution to this would be something like http://rubyeventmachine.com/ or http://nodejs.org/ . It's not ASP.NET, but it can easily solve the issue of distributing real-time data to other users. Since user connections, subscriptions and broadcasting to channels are built into each, that will make coding the rest super simple. Your clients would just connect over standard TCP.
If you needed clients to poll for updates then you would need a queue system to store info for the next request. That could be a simple array, or a more complicated queue system depending on your requirements and number of users.
There may be solutions for .NET that I am not aware of that do the same thing, but those are the two I know of.

Passing calculated data over a WCF service layer for reporting

I'm not really sure how to best word this. We have an ASP.NET web application with the backend services accessible over a WCF service layer. We need to add some reporting/dashboard type bits to the web application.
To make it scalable, the data needed for the reporting needs to be calculated on the back end. I'm just wondering if there is a recommended way to pass this data around. It doesn't make much sense to have different service methods to get the different bits of data; it feels like it should be summarised already.
I had a look at WCF Data Services, but that seems more for retrieving full object trees. Maybe some sort of XML document so extra items can be added to the summary without needing service layer changes?
The data would be things like number of orders today, number of orders specific to the person running it, open orders outstanding etc.
Does anyone have any pointers?
Thanks for your time
You can look at something like ASP.NET Web API and use an XML formatter for your data. You can use ViewModels to flatten your data and send it over the wire to your web app to bind to grids or whatever you need.
Basically you would get a request (filters, keywords, etc.) from your web app, send the parameters to your reporting back end, retrieve the reporting data, map the values to your ViewModels and serialize them using Web API. With Web API you can use all kinds of formatters for your data, from XML, CSV and JSON to vCard, iCal, PDF, etc.
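As a sketch of that shape - the summary fields come from the question, while the back-end proxy (ReportingServiceClient) is a placeholder for your WCF client:

using System.Web.Http;

// Flattened summary ViewModel; new figures can be added here without
// changing the shape of the service call.
public class DashboardSummaryViewModel
{
    public int OrdersToday { get; set; }
    public int MyOrdersToday { get; set; }
    public int OpenOrdersOutstanding { get; set; }
}

public class DashboardController : ApiController
{
    // GET api/dashboard?userId=...
    public DashboardSummaryViewModel Get(string userId)
    {
        // Assumed: a WCF client proxy that returns the pre-calculated figures.
        var summary = new ReportingServiceClient().GetSummary(userId);

        // Content negotiation serializes this to XML or JSON based on the
        // request's Accept header.
        return new DashboardSummaryViewModel
        {
            OrdersToday = summary.OrdersToday,
            MyOrdersToday = summary.MyOrdersToday,
            OpenOrdersOutstanding = summary.OpenOrders
        };
    }
}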
You can read more about it here: http://www.asp.net/web-api

Looking for guidance on WF4

We have a rather large document routing framework that's currently implemented in SharePoint (with a large set of cumbersome SP workflows), and it's running into the edge of what SP can do easily. It's slated for a rewrite in .NET.
I've spent the past week or so reading and watching WF4 discussions and demonstrations to get an idea of WF4, because I think it's the right solution. I'm having difficulty envisioning how the system will be configured, though, so I need guidance on a few points from people with experience:
Let's say I have an approval that has to be made on a document. When the wf starts, it'll decide who should approve, and send that person an email notification. Inside the notification, the user would have an option to load an ASP.NET page to approve or reject. The workflow would then have to be resumed from the send email step. If I'm planning on running this as a WCF WF Service, how do I get back into the correct instance of the paused service? (considering I've configured AppFabric and persistence) I somewhat understand the idea of a correlation handle, but don't think it's meant for this case.
Logging and auditing will be key for this system. I see that AppFabric makes event logs of this data, but I haven't cracked open the underlying database. Is it simple to use for reporting, or should I create custom logging activities to put around my actions? From experience, which would you suggest?
Thanks for any guidance you can provide. I'm happy to give further examples if necessary.
To send messages to a specific workflow instance you need to set up content-based message correlation between your different Receive activities. In order to do that you need some unique value as part of your message data (the document id, for example).
The AppFabric logging works well, but if you want to create a custom logging solution you don't need to add activities to your workflow. Instead you create a custom TrackingParticipant to do the work for you. How you store the data is then up to you.
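A minimal sketch of such a participant; the AuditStore call is a placeholder for whatever storage you choose, and you'd register the participant through the host's WorkflowExtensions:

using System;
using System.Activities.Tracking;

public class AuditTrackingParticipant : TrackingParticipant
{
    protected override void Track(TrackingRecord record, TimeSpan timeout)
    {
        var activityRecord = record as ActivityStateRecord;
        if (activityRecord != null)
        {
            // Assumed: your own audit store (database, log file, etc.).
            AuditStore.Save(activityRecord.InstanceId,
                            activityRecord.Activity.Name,
                            activityRecord.State);
        }
    }
}

// Registration, e.g. when configuring the WorkflowServiceHost:
// host.WorkflowExtensions.Add(new AuditTrackingParticipant());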
Your scenario is very similar to the one I used for the Introduction to Workflow Services Hands On Lab in the Visual Studio 2010 Training Kit. I suggest you take a look at the hands on lab or the Windows Server AppFabric / Workflow Services Demo - Contoso HR sample code.
