I have a VB ASP.NET (.aspx) file that has deeply nested logic, and I'm getting lots of build errors like "If must end with a matching End If" and "Do must end with a matching Loop". How do I begin to debug this beast to at least get it to build?
The simple answer is to remove a large nested section; if the file then builds, add parts back a bit at a time until it fails again. That's how I approach a problem like this.
Debugging starts after you get it to compile. With really mangled code that won't compile, you sometimes have to comment out blocks of code, fix what's left, and then uncomment them again. Also, if it all went haywire suddenly, check for quote problems: a single missing quote can break the whole file.
I have found that Visual Studio will generate the correct ending statements, so you have probably deleted a line by mistake, or commented one out.
As a rule, I try to avoid deeply nested statements. Can you refactor? A flat IF / ELSE IF / ELSE IF / ELSE / END IF construction is easier for human eyes and minds to parse. Maybe even pull some of the deeply nested logic out into a temporary function. Remember, someone will have to maintain your code, and even if that someone is you, in 12 months' time, complex structures are almost unintelligible.
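For instance, instead of nesting each new check inside the previous one, flatten the chain and hand the deep part to a helper. A rough sketch in C# (the question is VB, but the shape carries straight over to If / ElseIf / End If; the names here are made up purely for illustration):

// Deeply nested: every new condition adds another level and another End If to lose.
if (order != null)
{
    if (order.IsPaid)
    {
        if (!order.IsShipped)
        {
            Ship(order);
        }
    }
}

// Flattened: one level, each branch readable (and closeable) on its own.
if (order == null)
{
    // nothing to do
}
else if (!order.IsPaid)
{
    LogUnpaid(order);
}
else if (!order.IsShipped)
{
    Ship(order);   // or pull this block out into a small helper function
}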
I have added a reference to a DLL from another project (contained in the bin folder) and I have set Copy Local to true. When I step through the code, the debugger jumps all over the place. I believe this is because the code is optimised. I have two questions:
1. Is this because the code is optimised?
2. If (1) is true, why can I step through the code in the first place, i.e. without Reflector?
My guess is that the jumping is due to the PDB (symbols) being out of sync with the compiled DLL, so the symbols tell Visual Studio to go to a line number that does not match what the code is actually doing; optimization may also play a part, because of inlined functions.
Other things that influence the debugging experience are:
Just My Code setting
Methods explicitly marked with the DebuggerNonUserCode attribute (see the sketch below)
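For example, while "Just My Code" is enabled the debugger will step over and hide a method decorated like this (a minimal C# sketch; the class and method names are just illustrative):

using System;
using System.Diagnostics;

public class Helpers
{
    // The debugger skips this method and hides it from the call stack,
    // which from the outside can look like the debugger "jumping".
    [DebuggerNonUserCode]
    public static int Square(int x)
    {
        return x * x;
    }
}

public static class Program
{
    public static void Main()
    {
        // Pressing Step Into on this line does not enter Square's body;
        // the call completes as if it had been stepped over.
        Console.WriteLine(Helpers.Square(5));
    }
}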
Debugging optimized code may "jump around", as some functions become inlined. The most telling thing is that local variables usually get optimized away, giving a message to that effect when trying to read them.
If the jumping seems to make very little sense, though, then it's more likely you have the wrong PDB (which maps to line numbers) or source (which has the line numbers).
How does an experienced OCaml developer debug their code?
What I am doing now is just using Printf.printf. It is too troublesome, as I have to comment all the calls out when I need clean output.
How can I better control this debugging process? Is there a special annotation to switch the logging on or off?
You can use bolt for this purpose. It's a syntax extension.
By the way, OCaml has a real debugger.
There is a feature of the OCaml debugger that you may not be aware of, which is not commonly found with stateful programming, called time travel. See section 16.4.4. Basically, since all of the information from step to step is kept on the stack, by saving the changes associated with each step during processing, you can move back and forth through those changes to see the values at any given step. Think of it as running the program once, logging all of the values at each step into a data store, and then indexing into that data store by step number to see the values at that step.
You can also use ocp-ppx-debug, which adds a printf call with the correct source location for you instead of your having to add them manually.
https://github.com/OCamlPro-Couderc/ocp-ppx-debug
Our application is written in good (?) ol' classic ASP. Not ideal, but it works and it's pretty stable; it has been for 10-15 years. It is not particularly well documented in places, such as where a 'translation' (a client-controlled piece of text) appears. All we have against a translation is a clientid and a translationid, neither of which is particularly helpful. I've tried searching the (tens of thousands of lines of) core code for gettrans(1) (translation 1) and can see that doing this for another 3,100 translations is going to be a nightmare, not to mention inaccurate, as there are many functions that take a transid as a parameter and then call gettrans(transid) themselves.
My last thought on this is the possibility that we could detect, from inside gettrans, where the function is called from: not just the line number but the file name of the include (thankfully the includes are named usefully, so figuring out where a translation is used should not be too hard!). I doubt it is possible to get the include name, given that includes are processed before the ASP itself, but I'll settle for the overall filename; we can then combine that with the includes to get to the line of code and log the include file name.
I very much doubt this is possible and can't find anything on SO or Google. Does anyone know of any way to achieve this, or have any pointers on what I might try? Thanks in advance.
The most you can achieve is getting the currently executing script, which can be obtained with:
Dim currentPage
currentPage = Request.ServerVariables("SCRIPT_NAME")
When called from inside an included page, it will give you the "parent" page.
However, getting "caller" information is not possible with classic ASP as far as I know, so you will have to add another parameter to the function being called and then change all the call sites to pass it, in order to identify where the call comes from. It looks like someone did something similar and called it ASP Profiler; use it at your own risk, of course (I have never tried it myself).
I'm trying to analyze some running times for various methods in my default.aspx.cs page. I need to use TextWriterTraceListener.
I have it set up so that output is being redirected to the file. However, the data isn't what I want.
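For reference, this kind of listener is typically wired up along these lines (a minimal sketch of the usual System.Diagnostics setup; the file path is just an example):

// e.g. in Global.asax Application_Start, or equivalently via <system.diagnostics> in web.config
System.Diagnostics.Trace.Listeners.Add(
    new System.Diagnostics.TextWriterTraceListener(@"C:\temp\trace.log"));
System.Diagnostics.Trace.AutoFlush = true;   // flush after every trace call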
As far as I can tell if you simply enable tracing and have the trace appended to the bottom of the webpage you get this nice table with times in between trace calls. It's these times that I care about. An example can be seen here...
http://dotnetperls.com/trace-basics-aspnet
Using TextWriterTraceListener, say, for example, I add this to the top of Page_Init:
System.Diagnostics.Trace.TraceInformation("Begin Page_Init");
and to the bottom...
System.Diagnostics.Trace.TraceInformation("End Page_Init");
here's the output I get...
WebDev.WebServer.EXE Information: 0 : Begin Page_Init
WebDev.WebServer.EXE Information: 0 : End Page_Init
This isn't helpful to me. For one, I don't know what the 0 means. Maybe it's the time, but it's not recognizing anything less than 1 second? I don't know.
I can't find any good resources on this, but I know I have to be missing something, this seems not very useful as is. How do I get times without resorting to some sort of trickery?
Edit: I should note that I found this...
Formatting trace output
However, it seems the consensus is to use log4net or roll your own. Is this really the answer? I'm unsure how this method of tracing could be that useful without any type of timing.
OK, well, writing out Stopwatch values works fine, so I guess I'll just do that. It just seems like it should work the same way as when you enable tracing on the page, but apparently not.
Anyway, as far as I can tell, the answer is to use your own method of timing in conjunction with TextWriterTraceListener.
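Something along these lines, for example (a rough sketch of that approach; the timing placement is just illustrative):

protected void Page_Init(object sender, EventArgs e)
{
    // Time the block yourself with a Stopwatch and put the elapsed value
    // into the trace message, since the listener does not add timings for you.
    var sw = System.Diagnostics.Stopwatch.StartNew();
    System.Diagnostics.Trace.TraceInformation("Begin Page_Init");

    // ... page initialization work ...

    sw.Stop();
    System.Diagnostics.Trace.TraceInformation(
        "End Page_Init ({0} ms)", sw.ElapsedMilliseconds);
}

Setting the listener's TraceOutputOptions to include TraceOptions.DateTime or TraceOptions.Timestamp also makes it append a timestamp to each entry, which can be enough if you only need the deltas between calls.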
It seems like a fairly large hassle to set up a proper debug environment in ASP.NET, and I'm wondering whether using asserts is the way to go or not. I've read a bit and have seen that you need to modify your web.config to properly use asserts. Is this usually the best way to go, or are there other methods of debugging that might be easier to use?
We don't use a unit testing framework so that isn't really relevant to the question.
How do you know the difference between them working properly and not working at all? Currently I can put asserts in my code and they will do absolutely nothing, because they are not configured in web.config. This seems dangerous to me.
I would direct you here: When should I use Debug.Assert()?. There are several good answers that can tell you when it's good to use them, and you can figure out from there if it's worth it in your app.
Having Debug.Assert calls helps ensure your code is correct. Combined with the right set of test cases, they will definitely help you.
Several unit test frameworks come with handlers that can log messages and throw exceptions on assert failures. Choosing one of these frameworks or writing your own handler is something you may have to think about; once the unit test code catches these exceptions, they should be logged and the test marked as failed.
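As a rough sketch of the "write your own handler" route: Debug.Assert failures are routed to the registered trace listeners via their Fail method, so a custom listener can turn them into logged exceptions instead of message boxes (the listener name and messages below are just illustrative):

using System;
using System.Diagnostics;

// Turns failed asserts into exceptions instead of the default dialog.
public class ThrowingAssertListener : TraceListener
{
    public override void Write(string message) { }
    public override void WriteLine(string message) { }

    // Debug.Assert ends up here when the condition is false.
    public override void Fail(string message, string detailMessage)
    {
        throw new InvalidOperationException(
            "Assert failed: " + message + " " + detailMessage);
    }
}

// Registration, e.g. in Application_Start:
// Debug.Listeners.Clear();                          // drop the default dialog listener
// Debug.Listeners.Add(new ThrowingAssertListener());
//
// Usage:
// Debug.Assert(user != null, "user must be loaded before rendering");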