Where can I view roundtrip information in my ASP.NET application? - asp.net

I'm playing around with storing application settings in my database, but I think I may have created a situation where superfluous roundtrips are being made. Is there an easy way to view roundtrips made to an MS Access (I know, I know) backend?
I guess while I'm here, I should ask for advice on the best way to handle this project. I'm building an app that generates links based on file names (files are numbered ints, 0-5000). The files are stored on network shares, arranged by name, and the paths change frequently as files are bulk transferred to create space, etc.
Example:
Files 1000 - 2000 go to /path/1000s
Files 2001 - 3000 go to /path/2000s
Files 3001 - 4000 go to /path/3000s
etc
I'm sure by now you can see where I'm going with this. Ultimately, I'm trying to avoid making a roundtrip to get the paths for every single file as they are displayed in a gridview.
I'm open to the notion that I've gone about this all wrong and that my idea might be rubbish. I've toyed around with the notion of just creating a flat file, but if I do that, do I still run into the problem of having that file opened and closed for every file displayed in a gridview?
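To make that concrete, here is a sketch of the caching approach in question (the Paths table, its columns, and the connection string are made up for illustration): read the range-to-path mappings once, cache them in ASP.NET's cache, and resolve each file number in memory while the gridview binds. Whether the mappings live in Access or in a flat file, you then pay one read per cache refresh instead of one per row.

    using System;
    using System.Collections.Generic;
    using System.Data.OleDb;
    using System.Web;
    using System.Web.Caching;

    public class PathRange
    {
        public int Start;
        public int End;
        public string Path;
    }

    public static class PathResolver
    {
        // Hypothetical connection string; adjust the provider/path to your .mdb.
        const string ConnStr =
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\\app.mdb";

        public static string GetPathForFile(int fileNumber)
        {
            var ranges = (List<PathRange>)HttpRuntime.Cache["PathRanges"];
            if (ranges == null)
            {
                ranges = LoadRanges();   // the only roundtrip, at most once per 5 minutes
                HttpRuntime.Cache.Insert("PathRanges", ranges, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
            }
            foreach (PathRange r in ranges)
                if (fileNumber >= r.Start && fileNumber <= r.End)
                    return r.Path;
            return null;
        }

        static List<PathRange> LoadRanges()
        {
            var result = new List<PathRange>();
            using (var conn = new OleDbConnection(ConnStr))
            using (var cmd = new OleDbCommand(
                "SELECT RangeStart, RangeEnd, SharePath FROM Paths", conn))
            {
                conn.Open();
                using (OleDbDataReader reader = cmd.ExecuteReader())
                    while (reader.Read())
                        result.Add(new PathRange
                        {
                            Start = reader.GetInt32(0),
                            End = reader.GetInt32(1),
                            Path = reader.GetString(2)
                        });
            }
            return result;
        }
    }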

1) Set a breakpoint on the first line of Page_Load by clicking in the leftmost bar (the dim, thick line down the left side of the code editor). You should see a round red mark appear there, then
2) ... run the app under the debugger in Visual Studio (hit F5).
3) Switch back to Visual Studio after the app has started and step through the program, line by line, by pressing F8 (Step Into under the Visual Basic key bindings; F11 in the default C# scheme). Great fun
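If you would rather count the roundtrips than just step through them (which also answers the original question), a small data-access wrapper can log every command to the Output window. A sketch; OleDb and the helper class are assumptions based on the Access backend:

    using System.Data.OleDb;
    using System.Diagnostics;
    using System.Threading;

    public static class Db
    {
        static int _roundtrips;

        public static object ExecuteScalar(string connStr, string sql)
        {
            // Every call through here is one trip to the Access file;
            // watch the Output window while the gridview binds.
            Debug.WriteLine("Roundtrip #" +
                Interlocked.Increment(ref _roundtrips) + ": " + sql);
            using (var conn = new OleDbConnection(connStr))
            using (var cmd = new OleDbCommand(sql, conn))
            {
                conn.Open();
                return cmd.ExecuteScalar();
            }
        }
    }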


ASP.NET Browse and select a file on local PC error [duplicate]

This question already has answers here:
IIS7 - The request filtering module is configured to deny a request that exceeds the request content length
(3 answers)
I am using this URL to select a file from a local PC:
https://asp.net-tutorials.com/controls/file-upload-control/
If I select a file whose name is about 17 characters long, plus a 3-character extension, I get this error:
HTTP Error 404.13 - Not Found
The request filtering module is configured to deny a request that exceeds the request content length.
An example of such a file is My123LongFileName.mp3.
If I select a different, smaller file (e.g. File123.mp3), there is no error.
How do I allow my ASP.NET app to select files with long names?
In this day and age, if you're looking to set up some kind of file-uploading page and system, then I think and suggest:
Use an AJAX-enabled file uploader.
They are great, since they upload in small chunks. This not only lets you upload files of really any size, but also means the user gets a nice progress bar during the upload (so for a larger file it doesn't look like the browser has frozen). Even better, they allow a graceful cancel of such uploads and respond instantly when a user decides to cancel.
There are a good number of file uploaders free for the taking.
For example, I use the AJAX Control Toolkit uploader. It allows multiple files, has a progress bar, and even has a "hot spot" onto which you can just drag and drop files from the desktop.
So, during an upload you have a progress bar (and a cancel button).
It also has a rather nice server-side event model (on start of the upload, when each file completes, and when all files are done).
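A sketch of what that looks like in WebForms (the IDs, folder, and handler body are illustrative, and the exact API is worth checking against the toolkit's docs):

    <!-- Markup: assumes the toolkit's tag prefix is registered.
         UploadComplete fires once per finished file. -->
    <ajaxToolkit:AjaxFileUpload ID="AjaxFileUpload1" runat="server"
        MaximumNumberOfFiles="10"
        OnUploadComplete="AjaxFileUpload1_UploadComplete" />

    // Code-behind: the "one file done" event
    protected void AjaxFileUpload1_UploadComplete(object sender,
        AjaxControlToolkit.AjaxFileUploadEventArgs e)
    {
        // Save the re-assembled file; sanitize e.FileName in real code.
        AjaxFileUpload1.SaveAs(Server.MapPath("~/Uploads/" + e.FileName));
    }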
So, after uploading, I can then display the files back to the user.
So, I don't think it's worth rolling your own file uploader, and as noted, there are many sample uploaders you can adopt; many are free, like the above one (the AjaxFileUpload control) from the AJAX Control Toolkit for WebForms.
As noted, since this uploader sends the file in small chunks, the upload size is pretty much unlimited, and you don't overload the browser and server by attempting to push one huge file up in a single post-back. In fact, the above uploader does not do any post-backs at all.
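One aside on the actual error above: HTTP 404.13 is IIS request filtering rejecting the request body for its size, not the file name for its length (the longer-named file was presumably also the bigger one). If you do stay with a plain single-post upload, the usual fix from the linked duplicate is to raise both limits in web.config; the 50 MB values below are just an example (maxAllowedContentLength is in bytes, maxRequestLength is in kilobytes):

    <system.webServer>
      <security>
        <requestFiltering>
          <!-- 50 MB, in bytes -->
          <requestLimits maxAllowedContentLength="52428800" />
        </requestFiltering>
      </security>
    </system.webServer>
    <system.web>
      <!-- 50 MB, in kilobytes -->
      <httpRuntime maxRequestLength="51200" />
    </system.web>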

DataFactory copies files multiple times when using wildcard

Hi all, complete ADF newbie here. I have a strange issue with Data Factory and surprisingly can't see that anyone else has experienced this same issue.
To summarize:
I have set up a basic copy activity from blob storage to an Azure SQL database with no transformation steps
I have set up a trigger based on a wildcard name, i.e. any files loaded to blob storage that start with IDT* will be copied to the database
I have loaded a few files to a specific location in Azure Blob storage
The trigger is activated
Just when it looks like it all works, a quick check of the record count shows that the same files have been imported X number of times
I have analysed what is happening: when I load my files to blob storage, they don't technically arrive at the exact same time. So when file 1 hits the blob, the wildcard search is triggered and finds one file. Then when the second file hits the blob some milliseconds later, the wildcard search is triggered again and this time processes two files (the first and the second).
The problem keeps compounding with the number of files loaded.
I have tried multiple things to get this fixed, to no avail, because fundamentally it is behaving "correctly".
I have tried:
1. Deleting each file once it has been processed, but again, due to the millisecond timing the file is technically still there and can still be processed
2. Adding a loop to process one file at a time, deleting each file (matched by name in the blob) before the next is loaded, but this hasn't worked (and I can't explain why)
3. Limiting ADF to only 1 concurrent connection; this reduces the number of duplicates but unfortunately still duplicates
4. Putting a wait timer at the start of the copy activity, but this causes a resource locking issue: I get an error saying that multiple waits are causing the process to fail
5. A combination of 1, 2 and 3, which leaves me with an entirely different issue: the pipeline tries to find file X, which no longer exists because it was deleted as part of attempt 2 above
I am really struggling with something that seems extremely basic, so I am sure I am overlooking something fundamental, as no one else seems to have this issue with ADF.
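For what it's worth, one common way out of this (a sketch, not tested against this setup; the pipeline and parameter names are made up, but @triggerBody().fileName and @triggerBody().folderPath are the metadata a Blob storage event trigger exposes): let the trigger fire once per blob and pass that blob's own name into the pipeline, then point the copy activity's source dataset at the parameters instead of the IDT* wildcard. Each run then copies exactly the one file that raised the event, so the re-scanning that compounds the duplicates never happens. In the trigger definition, the binding looks roughly like:

    "pipelines": [
      {
        "pipelineReference": {
          "referenceName": "CopyIdtFile",
          "type": "PipelineReference"
        },
        "parameters": {
          "sourceFolder": "@triggerBody().folderPath",
          "sourceFileName": "@triggerBody().fileName"
        }
      }
    ]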

SSRS dynamic report generation, pdf and subscriptions?

If this question is deemed inappropriate because it does not have a specific code question and is more "am I barking up the right tree," please advise me on a better venue.
If not: I'm a full-stack .NET Web developer with no SSRS experience, and my only knowledge comes from the last 3 sleepless nights. The app my team is working on requires end users to be able to create as many custom dashboards as they would like by creating instances of a dozen or so predefined widget types. Some widgets are as simple as a chart or table, and the user configures the widget to display a subset of possible fields selected from a larger set. We have a few widgets that are composites. The Web client is all Angular and consumes a RESTful Web API.
There are two more requirements: that a reasonable facsimile of each widget can be downloaded as a PDF report on request, and that this can also happen at scheduled times. There are several solutions to this requirement, so I am not looking for alternate solutions. If SSRS would work, it would save us from having to build a scheduler and either find a way to leverage the existing Angular templates or create views based off of them, populate them, and convert that to a PDF. What I am looking for is help in understanding report generation best practices and how they interact with .NET assemblies.
My specific task is to investigate whether SSRS can create a report based on a composite widget and either download it as a PDF or schedule it as one, and if so, create a POC based on a composite widget that contains 2 line graphs and a table. The PDF versions do not need to be displayed the same way as the UI, where the graphs are on the same row and the table is below. I can show each graph on its own, as long as the display order is reading order (left to right, then down to the next line).
An example case could be that the first graph shows the sales of Xboxes over the course of last year. The line graph next to it shows the number of new releases for the Xbox over the course of last year. The table below shows the number of Xbox accessories sold last year, grouped by accessory type (controller, headset, etc.) and by month, ordered by the total sales amount per month.
The example above would take 3 queries. The queries are unique to that user's specific instance of that widget on that specific dashboard. The user can group, choose sort columns, and anything else that is applicable.
How these queries are created is not my task (at least not yet), so there is an assumption that a magic query engine creates and stores these SQL queries correctly in the database.
My target database is SQL Server 2012 and its Reporting Services. I'm disappointed it only supports the 2.0 CLR.
I have the rough outline of a plan, but given my lack of experience, any help with this would be appreciated.
It appears I can use the SOAP service for scheduling and management. That's straightforward.
The rest of my plan sounds pretty crazy. Any corrections, guidance, and better suggestions would be welcome, or maybe a different methodology. The report server is a big security hole, and if I can accomplish the requirements by only referencing the Reporting namespaces, please point me in the right direction. If not, this is the process I have cobbled together after 3 days of research and a few simple MSDN tutorials. Here goes:
To successfully create the report definition, I will need to reference every possible field in the entire superset available. It isn't clear yet if the superset for a table is the same as the superset for a graph, but for this POC I will assume they are. This way, I will only need a single stored procedure with an input parameter that identifies the correct query, which I will select and execute. The result set will be a small subset of the possible fields, but the stored procedure will return every field, with nulls in each row for the omitted fields, so that the report knows about every field. Terrible. I will probably be returning 5 columns with data and 500 full of nulls. There has to be a better way. Thinking about the performance hit is making me queasy, but that part was pretty easy. Now I have a deployable report. I have no idea how I would handle summaries. Would they be additional queries that I would just append to the result set? Maybe the magic query engine knows.
Now for some additional ugliness. I have to request the report URL with a query string that identifies the correct query. I am guessing I can also set the scheduler up with the correct parameter. But man, do I have issues. I could call the URL using HttpWebRequest for my download, but how exactly does the scheduler work? I would imagine it would create the report in a similar fashion, and I should be able to tell it what format to render in. But for the download I would be streaming HTML. How would I tell the report server to convert it to a PDF and then stream it as such? Can that be set in the report's definition before deploying it? It has no problem with the conversion when I play around on the report server. But at least I've found a way to secure the report server by accessing it through the Web API.
Then there is the issue of cleaning up the null columns. There are extension points, such as data processing extensions. I think these are almost analogous to a step in the Web page life cycle, but I'm not sure exactly, or else they would be called events. I would need to find the right one so that I can remove the null data columns, or labels on a pie chart at null percent, if that doesn't break the report. And I need to do it while it is still RDL. And just maybe, if I still haven't found a way, transform the RDL to a PDF and change the content type. It appears I can add .NET assemblies at the extension points. But is any of this correct? I am thinking like a developer, not like a seasoned SSRS pro. I'm trying, but any help pushing me in the right direction would be greatly appreciated.
I had tried revising that question a dozen times before asking, and it still seems unintelligible. Maybe my own answer will make my own question clear, and hopefully save someone else having to go through what I did, or at least be a quick dive into SSRS from a developer standpoint.
Creating a typical SSRS report involves (a quick 40,000-foot overview):
1. Creating your data connection
2. Creating a SQL query or queries, which can be parameterized
3. Creating datasets that the query results will fill
4. Mapping dataset columns to report items: charts, tables, etc.
Then you build the report and deploy it to your report server, where the report can be requested by URL, with any SQL parameter values added as a query string:
http://reportserver/reportfolder/myreport?param1=data
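Worth knowing at this point: the same URL-access mechanism accepts rendering commands, and rs:Format=PDF makes the server do the PDF conversion, which answers the "how do I stream it as a PDF" worry above. A sketch (the report path and parameter are illustrative, and DefaultCredentials assumes Windows auth to the report server):

    using System.Net;

    string url = "http://reportserver/ReportServer?/reportfolder/myreport"
               + "&param1=data&rs:Format=PDF";
    using (var client = new WebClient())
    {
        client.Credentials = CredentialCache.DefaultCredentials;
        client.DownloadFile(url, @"C:\temp\myreport.pdf");  // output path is illustrative
    }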
How this works is that an RDL (Report Definition Language) file, which is just an XML document with a specific schema, is generated. The RDL has two elements that were relevant to me, DataSets and ReportItems. As the names imply, the first contains the queries, and the latter contains the graphs, charts, tables, etc. in the report and the mappings to the columns in the dataset.
When the report is requested, it goes through a processing pipeline on the report server. By implementing Interfaces in the reporting services namespace, one could create .NET assemblies that could transform the RDL at various stages in the pipeline.
Reporting Services also has two reporting APIs: one for managing reports, and another for rendering. There is also the ReportViewer control, a .NET WebForms control which is pretty rich in functionality and can be used to create and render reports without even needing a report server instance. The report files the control can generate are RDLC files, with the C standing for client.
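That local path looks roughly like this (a sketch; the RDLC path, dataset name, and output file are made up, and Microsoft.Reporting.WebForms is the WebForms flavor of the ReportViewer assembly):

    using System.Data;
    using System.IO;
    using Microsoft.Reporting.WebForms;

    var widgetTable = new DataTable();  // fill from your query; empty here for brevity
    var report = new LocalReport { ReportPath = @"Reports\Widget.rdlc" };
    report.DataSources.Add(new ReportDataSource("WidgetData", widgetTable));

    // Render to PDF entirely in-process, no report server involved.
    string mimeType, encoding, extension;
    string[] streamIds;
    Warning[] warnings;
    byte[] pdf = report.Render("PDF", null,
        out mimeType, out encoding, out extension, out streamIds, out warnings);
    File.WriteAllBytes(@"C:\temp\widget.pdf", pdf);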
Armed with all of this knowledge, I found several solution paths, but none of them was optimal for my purposes, and I have moved on to a solution that does not involve Reporting Services or RDL at all. But these may be of use to someone else.
I could transform the RDL file as it went through the pipeline. Not very performant, as this involved writing to the actual physical file and then removing the modifications after rendering. I was also using SQL Server 2012, which only supported the 2.0/3.5 framework.
Then there were the services. Using either service, I could retrieve an RDL template as a byte array from my application. I wasn't limited by the CLR version here. With the management service, I could modify the RDL and deploy it to the report server. I would only need to modify the RDL once, but given the number of files I would need, and having to manage them on the remote server, creating file structures by client/user/Dashboard/ReportWidget looked pretty ugly.
Alternatively, instead of deploying the RDL templates, why not just store them in the database as byte arrays? When I needed a specific instance, I could fetch the RDL template, add my queries and mappings to it, and then pass it to the execution service, which would render it. I could then save the resulting RDL in the database. It would be much easier for me to manage there. But now the report server would be useless: I would need my own services for management and for creating subscriptions, and to mail them I would need a queue service and an SMTP mailer, removing all the extras I would get from the report server, needing to write a ton of custom code, and still being bound by RDL. So I would be creating RDLM: RDL mess.
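For anyone chasing that last path: rendering a definition without deploying it goes through the execution endpoint, roughly like this (a sketch; ReportExecutionService is the proxy generated from a web reference to ReportExecution2005.asmx, and the RDL source is illustrative):

    using System.IO;
    using System.Net;

    var rs = new ReportExecutionService
    {
        Url = "http://reportserver/ReportServer/ReportExecution2005.asmx",
        Credentials = CredentialCache.DefaultCredentials
    };

    // In practice this would be the template fetched from the database.
    byte[] rdlBytes = File.ReadAllBytes("template.rdl");

    Warning[] warnings;
    rs.LoadReportDefinition(rdlBytes, out warnings);

    string extension, mimeType, encoding;
    string[] streamIds;
    byte[] pdf = rs.Render("PDF", null,
        out extension, out mimeType, out encoding, out warnings, out streamIds);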
It was the wrong tool for the job, but it was an interesting exercise, I learned more about Reporting Services from every angle, and was paid for most of that time. Maybe a blog post would be a better venue, but then I would need to go into much greater detail.

Uploading multiple/large files

I have this page where a user can upload documents (multiple documents, size limit 10MB each). It is a two-step process. Step 1 has the input form. Step 2 is the preview page with a submit button.
How should I handle the scenario where the user closes the browser while on the preview page, without submitting the form? Should I save the files in a temp location after step 1? Is this a decent solution?
And what are the best practices in general for uploading (reasonably) large files?
Thanks.
Take a look at this:
http://www.codeproject.com/Articles/68374/Upload-Multiple-Files-in-ASP-NET-using-jQuery
One way or another, you'll probably end up looking at a jQuery/AJAX control to do this.
You can use a temporary folder to save the files and copy the files to their final location only on submission of the form.
In any case, it would be better to implement a garbage collector that empties the temporary folder every night. And if you have a way to identify files that were never submitted (for example, if a row is added to a database upon submission), you can put the files in their final location from the beginning and let the garbage collector remove the orphaned ones every night, as sketched below.
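A minimal sketch of that nightly cleanup (the temp path and the one-day threshold are illustrative; anything older is assumed abandoned):

    using System;
    using System.IO;

    // Run nightly (e.g. from a scheduled task): delete abandoned temp uploads.
    var tempDir = new DirectoryInfo(@"D:\Uploads\Temp");
    foreach (FileInfo file in tempDir.GetFiles())
    {
        if (file.LastWriteTimeUtc < DateTime.UtcNow.AddDays(-1))
            file.Delete();
    }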
Upload of large files can be done using a jQuery UI plugin such as Uploadify: http://www.uploadify.com/.
You should pay attention that it uses Flash, which on the one hand is very good for uploading large files, but on the other hand will prevent your application from supporting Apple devices such as the iPad.
If the user leaves, then let them start over; more than likely they left for a good reason. If there was a crash, leave the responsibility on their end. If you choose to store their data without them submitting it, you could allow malicious users to exploit your storage.
You can also look into a process called chunking.
For a more in depth discussion on file uploads in mvc3, see this SO post: MVC 3 file upload and model binding

Flex web application gets progressively slower and freezes

I have a Flex web application where I am visualizing data (for different countries) in the form of charts. The data is in the form of CSV files. There are individual files for individual charts i.e. one file has all data pertaining to one chart for all countries.
I have a left navigation menu that allows one to see data on a country by country basis. As I view more and more countries, the web application becomes progressively slower till it freezes completely. The problem goes away if I refresh the browser and empty the cache.
I am using the URLLoader class in flex to read the CSV data into a string and then I am parsing the string to generate the charts.
I realize this is happening because more and more data is somehow accumulating in the browser. Is there any way in Flex to rectify this? Any pointers/help would be appreciated.
Thanks
- Vinayak
Like @OXMO456 said before me, I would use the profiler to check this issue.
To refine my answer, I would also say: please make sure that you are following all of the rules for low memory usage in Flex, like
1. clearing out (removing) event listeners
2. nulling out static variables
and so on (rule 1 is sketched below).
I would use the "snapshot" feature of the profiler and see what is happening in minute 1 and then minute 2, the difference between the two of these is probably the source of your leak.
