I'm kind of puzzled by the following: I'm making use of OCUnit in an iPhone static library project. Why can't I see the NSLog output in the debug area? Is there a way to fix it? I can see the NSLog output just fine in the system console, but switching back and forth gets annoying after a while (or perhaps I should just get a 2nd screen).
Edit: I guess the reason I don't see NSLog output in the debugger is that otest runs the tests, which is why the system console lists otest as the source of the NSLog output. It would still be nice if there were a way to see this output in the debugger.
The R GUI app on my Mac has been showing strange behavior lately. Whenever I use cursor-up to recall an item from the history, the app adds an empty line before the code. Every time I execute and recall again, it adds another. Below is a screenshot of what that looks like.
Does anyone know how to fix this error, by any chance?
When I build my website project using Visual Studio 2016 (or any version of Visual Studio, for that matter), the compiler gives me an error:
However, if I go to the ResetPassword.aspx file, the edtEmployeeSurname control is present and it has a runat="server" attribute.
There are also no errors shown when I open the ResetPassword.aspx.vb code file (so no red underlines under any variable names / control IDs).
What is really interesting is that the website (even ResetPassword.aspx) loads correctly in the browser without any issues, and I can submit the form.
If I comment out all the code in ResetPassword.aspx.vb, the compiler just finds another control that "doesn't exist", and so it carries on through a lot of pages.
All I want to know is:
What causes these issues
How to fix these issues OR how to determine what the issue is.
If this is a common mistake that some developers make, then please help me formulate a search string to use in Google, because most of my search results were obscure or off-topic.
I've run into this sort of thing before, and I believe the errors you are seeing are red herrings. My guess is that one of your lower-level projects, where your user controls are defined (or possibly something even further down), has an unrelated error that is preventing Visual Studio from building it, which in turn makes Visual Studio think your user controls aren't defined.
What I normally do is build the solution and watch the Output window. It builds all of your projects individually, and the first error that pops up in the Output window is the source of your problem. Everything else you are seeing is a symptom of that original issue. If you fix the first error that shows up in the output, the solution will either build correctly or you will have to repeat the process with the next error that pops up.
Visual Studio used to order the errors in the Error List in the order they came up during the build, but that has changed. Personally, I really preferred the old sorting (I think there is a setting you can use to get it back, but I can't think of it off the top of my head).
From the screenshots, it looks like you have 50 errors in your project. An application will not normally run successfully with build errors unless you have set it up to do so.
You can make the application run even if there are compile errors.
Check this SO post:
Debugging runs even with compiler's errors in Visual Studio
Note:
If you have already cleaned and rebuilt the solution,
try running the application in another browser or on another computer; you may be seeing data from a previous successful run.
Based on the wording of the error, I believe it's possible you are referring to some of these controls outside the code-behind of ResetPassword.aspx. The latter part of the message says "It may be inaccessible due to its protection level." By default, the backing field for a control you place on a page is Protected and therefore cannot be seen outside that page's class or its inheritance chain.
I'm using VoiceOver during development to test accessibility changes.
Many times VoiceOver detects changes properly and starts reading them, but it is interrupted by new information. So the information that is important is essentially cancelled when additional changes present themselves.
In my case I have an alert that's very important, but ancestor changes seem to get read instead.
If I could see a log of everything VoiceOver is saying, I could at least be confident the text is being read and figure out a way to mitigate the problem (possibly by delaying it).
Is there any way to get a VoiceOver log?
I don't believe there is any way to print out a log, but you can save the output to an audio file by pressing ctrl-option-shift-Z. If the audio is running too quickly you could try slowing it down or using some commands to repeat the output. Some of the commands listed here might be helpful:
http://lab.dotjay.co.uk/notes/voiceover-commands/
I use squish-4.2.2 for GUI testing of our tool and purecov.i386_linux2.7.3 for coverage. Our tools are based on the qt-4.7.4_qsci version of Qt. After building our tools in Purecov mode, our tests fail whenever they contain an operation on a pop-up menu, and Purecov cannot generate the resulting *.pcv file. I would also like to note that our tools do not fail when run without Squish; however, the pop-up menu then opens only after 30-60 seconds (in normal mode it opens within 1-2 seconds).
So I have 2 issues:
1. When tests are run with Squish, they fail if they contain an operation on a menu item;
2. Purecov does not generate the *.pcv file when the tests fail.
I tried to find something on your site for resolving these problems, but I couldn't find anything related to my issues.
In my opinion, Squish fails because when I try to open a menu item, the GUI runs faster than its logic part, and after the menu item opens, Squish considers the operation done and kills my tool.
Could you please tell me what I can do with my tests or tools to resolve these problems?
Thanks.
I had a similar issue in the past with clicking menus in my app.
I hope this helps you too!
Example:
I wanted to open the "File" menu, followed by the (pop-up) sub-menu "New". When I am in Squish's record mode, it records the following Python code:
activateItem(waitForObjectItem(":MainWindowForm.m_poMainMenu_QMenuBar", "File"))
activateItem(waitForObjectItem(":MainWindowForm.menuFile_QMenu", "New..."))
Now, this didn't work all the time, and honestly I didn't manage to understand why :).
But I found a possible solution on their site. So I replaced the symbolic names in the code above and created a function that looks up the objects by their real name properties:
def callMenu(menu_name, submenu_name):
    # Find the top-level menu in the QMenuBar by real name properties instead of symbolic names
    activateItem(waitForObjectItem("{type='QMenuBar' visible='true'}", menu_name))
    # Then activate the entry in the QMenu whose title matches the menu we just opened
    activateItem(waitForObjectItem("{type='QMenu' title='%s'}" % menu_name, submenu_name))
After I made this change, the tests ran smoothly, without any more problems (at least on the menu side).
I am new to Robot Framework. I want to know how to capture screenshots on failure.
Doesn't Robot Framework automatically take screenshots if a script fails?
An example would be of great help!
This is actually a feature of Selenium2Library, the library you would need with Robot Framework if you were doing Selenium-based tests.
More information can be found here: http://robotframework.org/Selenium2Library/doc/Selenium2Library.html
As it says in the documentation, setting up screenshots on failure is very easy; for example, here is mine from a test suite I'm working with:
Library    Selenium2Library    timeout=10    implicit_wait=1.5    run_on_failure=Capture Page Screenshot
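In context, a minimal suite using that import could look roughly like this (the URL and locators are placeholders, not from the original question); once run_on_failure is set, any failing Selenium2Library keyword triggers Capture Page Screenshot automatically:
*** Settings ***
Library    Selenium2Library    timeout=10    implicit_wait=1.5    run_on_failure=Capture Page Screenshot

*** Test Cases ***
Login Page Shows Welcome Message
    Open Browser    https://example.com/login    firefox
    Input Text    id=username    demo
    Click Button    id=login
    # If the next keyword fails, a screenshot is added to the log automatically
    Page Should Contain    Welcome
    [Teardown]    Close Browser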
You can use the keyword below to capture a screenshot after any step you want:
Capture Page Screenshot
Hope this was helpful!
In this case the teardown is executed whether or not the test case completes successfully, and if the test case fails, it takes a screenshot:
[Teardown]    Run Keyword If Test Failed    Capture Page Screenshot
Or, even better, you can set it at the suite level (as a Test Teardown setting in the *** Settings *** section, without the brackets) if you don't need different teardowns for particular tests:
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot
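To make the placement concrete, here is a rough sketch of a suite using the suite-level setting (the test body is just a placeholder); note that a per-test [Teardown] like the one above overrides this default for that particular test:
*** Settings ***
Library    Selenium2Library
Test Teardown    Run Keyword If Test Failed    Capture Page Screenshot

*** Test Cases ***
Example Test
    Open Browser    https://example.com    firefox
    Page Should Contain    Welcome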
So far, all of the other answers assume that you are using Selenium.
If you are not, there is a "Screenshot" library that has the keyword "Take Screenshot". To include this library, all you need to do is put "Library    Screenshot" in your Settings table.
In my Robot Framework code, my teardowns all just reference a keyword I made called "Default Teardown", which is defined as:
Default Teardown
    Run Keyword If Test Failed    Take Screenshot
    Close All
(I think Close All might be one of my custom keywords.)
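For completeness, the pieces above get wired together roughly like this (only a sketch; Close All stands in for whatever cleanup keyword your suite actually uses):
*** Settings ***
Library    Screenshot
Test Teardown    Default Teardown

*** Keywords ***
Default Teardown
    Run Keyword If Test Failed    Take Screenshot
    Close All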
I have noticed a few issues with the Take Screenshot keyword. Some of this may be configurable, but I don't know. First off, it will take a screenshot of your screen, not necessarily just the application that you are interested in. So if you're using this and are allowing other people to view the resulting screenshots, make sure that you don't have anything else on your screen that you wouldn't want to share.
Also, if you kick off your tests and then lock your screen so you can take a quick break while it runs, all of your screenshots will just be pictures of your lock screen.
I'm using this on my Jenkins server as well, which uses the xvfb-run command to create a sort of virtual display for running the Robot Framework tests. If you're doing this too, make sure your xvfb-run command includes something along the lines of
xvfb-run --server-args="-screen 0 1024x768x24" <rest of your command>
You'll have to decide what screen resolution works the best for you, but I found that with the default screen resolution, only a small portion of my app was captured.
Long story short, I think you're better off using Capture Page Screenshot if you're using Selenium. However, if you're not, this may be your best (or only) solution.