Generated code in VS2013: line endings handled differently within the same file - asp.net

I have read most, if not all, of the related questions on the site, but I don't see the same problem described, so...
I am generating some webforms code (both .aspx and .aspx.cs files), however my problem seems to be confined to the .aspx files only (the 'display' side).
When I open a file that was generated with my tool, some lines of code are combined on a single line in the VS2013 editor, and some lines of code are not.
I modified the code generator to add a System.Environment.NewLine at the end of every line to see if I could produce consistency. This was obviously the wrong action to take. It did solve the problem of some lines of code being concatenated horizontally in the editor, but other lines of code are now consistently double-spaced.
The treatment of the line endings appears to differ between:
(1) JavaScript code and plain text lines -- I have to insert the .NewLine at the end of every JavaScript line and every line that might contain simple text to get a separate line.
(2) HTML and asp.net/html code -- if I insert a .NewLine at the end of these lines of code they will come out double-spaced.
I realize I can address this by modifying my code generator to handle the different types of code in different manners, but does anyone know why these lines of code would be handled differently to begin with? I'm just trying to learn and understand where these characteristics are coming from. Altering the Advanced Save Options doesn't seem to have any effect on this situation.
Thanks in advance!
Lynn

Thanks for the consideration over the week-end. It is most appreciated.
The line of code that is introducing the System.Environment.NewLine is:
SrcCode_Data.SrcCodeCode = SourceText_Data.SourceTextCode + System.Environment.NewLine;
The generated data (the contents of SrcCode_Data.SrcCodeCode above) contains:
" function SetAddItemButton() {\r\n" (obviously minus the double-quotes).
I realize I should probably mention one more step that is going on here, which very possibly could change things a bit...
The lines of code being generated (such as the one above for the JavaScript function) are written to a SQL Server database table during the generation process, using sequence numbers (3 columns) that put the generated code back into the appropriate order upon retrieval. At the end of the generation process, the database records are read back in using POCO classes, and the code values are written to the appropriate .aspx and .aspx.cs source code files. My apologies for leaving this out earlier.
Any thoughts?
Lynn

Related

Trying to read multiple lines from txt file separated with :, but I'm getting imbRecoverableException caught from worker -> parseNext

As I'm new to IBM MQ and IIB, I'm trying to experiment with online tutorials. At the moment I'm trying to make a simple app that reads several lines from a txt file, separated by colons, and writes them into an XML file. Currently I'm stuck at reading multiple lines from the file. I know how to make it work with only one line, but I can't with more than one. I do know that there should be a parent-child relationship between two complex types, but I can't configure them properly. I'm also using RFHUtil to send the message file into the queue.
Since I can't find much by googling it, I hope someone with the right knowledge could help.
Don't have any code, but got my message definition picture: http://prnt.sc/nv9npr
Here is the error I'm getting: http://prnt.sc/nv9nyi
There are two things I can see in your current screenshots.
In the first screenshot I can see \r\n, i.e. CRLF, which indicates that your separator needs to either be CRLF or your model needs to deal with the CRLF.
In the second you've got a partially parsed message. Try setting the Advanced Parser options on your Input node to ParseComplete; things will still blow up, but you should get better diagnostic information in the ExceptionList.
It looks like you are trying to use the MRM parser, which has been replaced by the DFDL parser. I suggest you find some tutorials on the DFDL parser; it's much more efficient. There is also support built into the Toolkit that will let you debug the Message Model you create: Testing a DFDL schema by parsing test input data
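For anyone who just wants to see the record/field structure the message model has to describe, here is a rough stand-alone sketch in C# (not IIB/ESQL; the file name is invented) that splits CRLF-delimited records and colon-separated fields the same way the model would need to:
using System;
using System.IO;

class ColonRecordSketch
{
    static void Main()
    {
        // Hypothetical input: one record per CRLF-terminated line,
        // fields within a record separated by ':'.
        foreach (string line in File.ReadAllLines("records.txt"))
        {
            if (line.Length == 0) continue;        // skip blank lines
            string[] fields = line.Split(':');     // colon-separated fields
            Console.WriteLine("<record>");
            foreach (string field in fields)
                Console.WriteLine("  <field>" + field + "</field>");
            Console.WriteLine("</record>");
        }
    }
}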

Referencing between CodeDomObjects and Text

I am writing a CodeGenerator. The string output is later on mixed with user code.
In order to be able to make changes to my generated code after the user edits the file, I have to make sure my generated parts are not editable.
I am currently not sure how to achieve this behaviour.
If I were able to track which line correlates to which CodeDomObject, and vice versa, I could tell my text editor to mark lines as read-only.
But at this point I have two problems:
(1) I don't know how to keep track.
(2) I am not sure whether my solution (which I am not able to implement...) is clean. There would be a lot of overhead, because I would have to find out which objects are generated and which are not. I could do so by comparing a generated CodeDom tree with the actual tree and marking the diffs as user objects.
I don't know if this solution is practical in your environment, but in Visual Studio this is handled using partial classes where required.
An example would be Windows Forms where the visual designer is responsible for creating source code to reproduce the form at runtime but the developer is expected to add event handling code to the class.
By having the developer's code in a separate source file it doesn't get overwritten when changes are made in the designer.
Of course, this won't help you if you need to have a single source file for your class.
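To make the partial-class split concrete, here is a minimal sketch (file, class, and member names are invented); the generator owns one file, the developer owns the other, and the compiler merges them into a single class:
// MyPage.Designer.cs - regenerated by the tool; any edits here are overwritten.
public partial class MyPage
{
    private void InitializeGeneratedParts()
    {
        // ... generated initialization code ...
    }
}

// MyPage.cs - hand-written; the generator never touches this file.
public partial class MyPage
{
    public void OnUserAction()
    {
        InitializeGeneratedParts();   // user code can freely call generated members
        // ... custom logic ...
    }
}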

Classic ASP - Detect line number and file from which a function is called?

Our application is made in good (?) ol' classic ASP. Not ideal but it works and it's pretty stable - has been for 10-15 years. It is not particularly well documented in places, such as where a 'translation' (client-controlled piece of text) appears. All we have against a translation is a clientid and translationid, neither of which are particularly helpful. I've tried searching the (10s of thousands of lines of) core code for gettrans(1) (translation 1) and can see that doing this for another 3100 is going to be a nightmare, not to mention inaccurate as there are many functions which are called with a transid passed into them, and then they call gettrans(transid).
My last thought on this matter is the possibility that we could maybe detect, from gettrans, where a function is called from - not just the line number but the file name of the include (thankfully the includes are named usefully so figuring out where a translation is used should not be too hard!). I highly doubt that it would be possible to get the include name on the basis that includes are processed before ASP, but I'll settle for the overall filename and then we can combine the includes to get to the line of code and log the include file name.
I very much doubt this is possible and can't find anything on SO or Google. Does anyone know of any way to achieve this, or have any pointers on what I might try? Thanks in advance.
Regards,
Richard
The most you can achieve is getting the currently executing script, which can be obtained by:
Dim currentPage
currentPage = Request.ServerVariables("SCRIPT_NAME")
When inside an included page it will give you the "parent" page.
However, getting caller information is not possible with classic ASP as far as I know, so you will have to add another parameter to the function being called, then change all calls to pass that parameter in order to identify where the call comes from. It looks like someone did something similar and called it ASP Profiler; use it at your own risk, of course (I've never tried it myself).

Displaying XML using CSS: How to handle &nbsp?

I'm dealing with a lot of .xml files (millions - an .xml-formatted dump of Wikipedia), and they're a lot more unreadable than I imagined.
For the time being, I've written a .css file to display them in a readable manner in a browser, and wrote a script to plug a reference to this .css into all the files.
(I know there are other solutions, like XSLT, but all the information I found made it seem document-level, which didn't suit; I'm really trying not to expand the size of these files if possible.)
The .css works fine for some of the files, but many contain entities like &nbsp and I get errors like:
"XML Parsing Error: undefined entity" with a nice little illustration pointing to &nbsp or it's kin within a quote.
There is an articles.dtd file, which seems like it should connect the dots ( keyword -> Unicode ) for the browser. It is referenced in each file like:
<!DOCTYPE article SYSTEM "../article.dtd">
and contains a lot of entries like:
<!ENTITY nbsp " "> <!-- no-break space = non-breaking space,
U+00A0 ISOnum -->
but either I'm entirely misunderstanding what this file is for, or it's not working correctly.
In any case, how can I make these documents display? Either by:
displaying the entities (like "&nbsp") as plain text,
removing the entities altogether (by any means other than just a linear search/removal of them in the actual files), or
interpreting the entities as Unicode, as they were intended.
Naturally, the latter is preferable; absolutely ideally, by referencing some sort of external file that maps entities to Unicode (if that's not what the articles.dtd file is for....)
EDIT: I'm not working with a powerful machine here.. extracting the .rars took days. Any sort of edits to each file would take a very long time.
It is not a very good way, just a workaround: try to replace &nbsp; with the numeric reference &#160;.
So I've since solved my problem; in case it helps anyone in future:
It turned out the guts of my problem was that external .dtd files are totally deprecated.
The function of the .dtd was in fact to declare the entities I was having trouble with (&nbsp; etc.), as I thought; but because external .dtd files are not supported by browsers any more (the browsers simply don't fetch/parse them, and the only way to force them to depends on files in the install of the browser on the client machine), the entities went undeclared.
I had sourced an .xml collection that was simply too old to be up to standards, without realizing it.
The solution best suited to my circumstances turned out to be lazy processing of each file as it was requested, with a simple flag to differentiate processed files from unprocessed ones.
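For anyone wanting a concrete starting point, that lazy pass amounts to something like the C# sketch below (the entity list and the in-place rewrite are illustrative, and the processed/unprocessed flag mentioned above is left out):
using System.IO;

static class EntityFixer
{
    // Swap a few undeclared named entities for numeric character references so
    // the browser's XML parser no longer hits "undefined entity".
    // The set shown here is illustrative; extend it as needed.
    static readonly string[,] Map =
    {
        { "&nbsp;", "&#160;" },
        { "&mdash;", "&#8212;" },
        { "&ndash;", "&#8211;" },
    };

    public static void FixInPlace(string path)
    {
        string text = File.ReadAllText(path);
        for (int i = 0; i < Map.GetLength(0); i++)
            text = text.Replace(Map[i, 0], Map[i, 1]);
        File.WriteAllText(path, text);
    }
}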

Tool to calculate # of lines of code in code behind and aspx files?

Looking for a tool to calculate the # of lines of code in an asp.net (vb.net) application.
The tricky part is that it needs to figure out the inline code in aspx files also.
So it will be lines of code in vb files (minus comments) plus the inline code in aspx files (not all the lines of aspx files, just the code between <% %> tags).
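(If nothing off-the-shelf handles the inline code, I suppose a rough home-grown approximation wouldn't be much code; a deliberately simplified sketch of what I mean, in C#, with naive regex and comment handling:)
using System;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

class LocSketch
{
    static void Main(string[] args)
    {
        string root = args.Length > 0 ? args[0] : ".";
        int vbLines = 0, inlineLines = 0;

        // Code-behind: count non-blank .vb lines that are not pure comments.
        foreach (string file in Directory.EnumerateFiles(root, "*.vb", SearchOption.AllDirectories))
            vbLines += File.ReadLines(file)
                           .Select(l => l.Trim())
                           .Count(l => l.Length > 0 && !l.StartsWith("'"));

        // Markup: count only the lines that sit inside <% ... %> blocks.
        var inlineBlock = new Regex(@"<%.*?%>", RegexOptions.Singleline);
        foreach (string file in Directory.EnumerateFiles(root, "*.aspx", SearchOption.AllDirectories))
            foreach (Match m in inlineBlock.Matches(File.ReadAllText(file)))
                inlineLines += m.Value.Split('\n').Count(l => l.Trim().Length > 0);

        Console.WriteLine("VB code-behind lines: " + vbLines + ", inline ASPX lines: " + inlineLines);
    }
}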
SlickEdit has some feature for that. I am not sure if it counts inline code. Worth giving it a try. If it does not work, let me know so that I can update my post.
The SLOC Report
The SLOC Report tool provides an easy way to count the lines of code. The line count is divided into three categories: code, comments, and whitespace. Once the lines of code have been counted, the results are drawn as a pie graph. SLOC reports may be generated for solutions, projects or individual files.
From a previous post, Source Monitor appears to be the answer and NDepend for .NET.
I've not tried it myself, but LineCounterAddin is a Visual Studio plugin that includes a step-by-step guide to its creation. It supports the formats you're asking about (VB and ASPX) as well as heaps more (e.g. XML, XSD, TXT, JS, SQL...).
I've had great experience with CLOC. It has a wide variety of command line options. One counter-intuitive thing with it, though: the first command line argument is the directory to begin counting in; usually you can just place cloc into the parent directory of your source and use "." (it goes through subdirectories of the specified directory).
