Log of everything VoiceOver is saying - accessibility

I'm using VoiceOver during development to test accessibility changes.
Often VoiceOver detects changes properly and starts reading them, but is then interrupted by new information, so the important announcement is effectively cancelled when additional changes arrive.
In my case I have an alert that's very important, but ancestor changes seem to get read instead.
If I could see a log of everything VoiceOver is saying, I could at least be confident the text is being read and figure out a way to mitigate the problem (possibly by delaying it).
Is there any way to get a VoiceOver log?

I don't believe there is any way to print out a log, but you can save the output to an audio file by pressing ctrl-option-shift-Z. If the audio is running too quickly, you could try slowing it down or using some commands to repeat the output. Some of the commands listed here might be helpful:
http://lab.dotjay.co.uk/notes/voiceover-commands/
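If the underlying goal is to keep the important alert from being pre-empted, the delaying idea mentioned in the question might look something like the minimal sketch below, assuming a web page with an ARIA live region (the element ID, function name, and delay value are hypothetical):
// Delay the important announcement so it isn't pre-empted by other DOM changes.
// "status-region" is a hypothetical element marked up with aria-live="assertive".
function announceAlert(message) {
  var region = document.getElementById('status-region');
  if (!region) return;
  region.textContent = ''; // clear first so a repeated message is still announced
  setTimeout(function () {
    region.textContent = message;
  }, 500); // hypothetical delay; tune it against the competing updates
}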

Related

Can I filter error messages from certain sources in the Chrome Dev console?

In the Chrome DevTools console, I keep getting error messages from certain places that do not actually affect my application's performance. Is there a way to filter out errors from those sources? (e.g., a YouTube iframe with errors, certain Chrome Extensions, etc.)
Yes, you can. You can filter out messages from any file source by right-clicking the file name and line in the console (something like main.js:15) and selecting Hide messages from *filename*. This blocks all messages coming from that file.
Warning: this will also hide console.log() messages from that file, which can hamper your debugging, as well as errors that might come up in the future and be important (and which you will no longer see). Use it with caution on your own files; it should be harmless with files that aren't yours (again, things like iframes and extensions).
You can reverse the block by going to the filter box near the top of the console (to the right of the eye icon) and deleting the entry. (Clearing the box removes all filters, but you can also delete just the one you need.) You can also filter messages more precisely using the filter box, but for the purposes of the question (blocking messages from a certain file), the right-click approach does the job fastest and best.
For more information, including the other ways to filter console messages more precisely, see https://developers.google.com/web/tools/chrome-devtools/console/reference#filter.
At the top of your console is a filter box. Here you can do
plain text search
regular expression search such as /\d+\s\d+/
attribute search, such as url:pagead
negate any of the above with the prefix -
combine (AND) any of the above with a space
For example, -url:pagead will filter out all messages that have pagead in the URL. There are two other attributes, context: and source:, but I don't know what they do.
For example, def anon will show only messages that contain both def and anon (not necessarily together).
I have not found any way to OR two expressions (UNION).
See the documentation.
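Putting those rules together, a single filter string can combine a plain-text term with a negated attribute. For instance (pagead is just an illustrative URL fragment):
error -url:pagead
would show only messages that contain error and do not come from a URL containing pagead.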

Is continuously previewing Google App Maker apps the only way to update/debug them?

I've used App Maker for several small projects since the launch of the EAP. Coming from TDD with PHP and hot reloading with Node, I'm frustrated by how long it takes to see changes and debug. I'm now wondering if I'm missing some critical piece of knowledge, because it seems too tedious.
Here's an example:
I have a form field that's bound to a client-side function. The function returns a calculated number based on the values of other form fields (some of which are also calculated). When I click Preview and fill out the form, of course, there's going to be a console error or the calculation will be off. So I tweak the function, hit Preview, and try again - repeatedly, until I get the expected output.
Since it takes 10-15 seconds for my preview to render each time, I'm spending a ton of dev time staring at a rotating circle.
I've had some success composing and debugging some of the scripts in another environment (Google Apps Script, a local IDE, etc.), then cutting and pasting them into App Maker. That doesn't work well when the function references App Maker models and widgets, though.
Is there something I'm missing here, or is repetitive previewing really the only way to design and debug within App Maker?
A quick and dirty method I use a lot is adding this line to any function I want to look at closely. App Maker shows a warning for every debugger call, so there's no risk of accidentally leaving it in the code.
debugger;
Of course, this only works for client-side scripts.
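As a minimal sketch (the function and field names here are made up), a client-side calculation bound to a form field might look like this; with the browser's DevTools open, the preview pauses on the debugger statement so the inputs and intermediate values can be inspected:
// Hypothetical client-side calculation bound to a form field.
function calculateTotal(quantity, unitPrice, discount) {
  debugger; // the preview pauses here when DevTools is open
  var subtotal = quantity * unitPrice;
  return subtotal - subtotal * discount;
}
Keeping the arithmetic in a plain function like this also makes it easier to test outside App Maker, which helps with the copy-and-paste workflow mentioned in the question.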

IAR Embedded Workbench breakpoint failure

I've been using IAR Embedded Workbench for quite some time, but there is still one thing I'm unable to wrap my head around, and that's the inconsistency of breakpoint behavior.
I have a fairly big project that runs an RTOS (might this affect the problem?), and when I place a breakpoint there is no guarantee the debugger will stop at it. Sometimes it does, sometimes it doesn't.
A workaround I found is manually halting the processor and placing the breakpoint while it is paused, but even that doesn't have a 100% success rate.
I'm generating debug information and I'm running in debug mode.
Has anyone had similar issues, or does anyone have some ideas?
Try to reduce the number of breakpoints you use (possibly keeping just the one that causes the problem). You could also use a "Log" breakpoint to print a message in the Debug Log window instead of stopping at that point (and you may want to try a different point in the same section of the code). If none of that helps, I would say with a high degree of certainty that the debugger does not stop at the breakpoint because it simply never hits it: you do not enter that block of code. Bear in mind that in an embedded project (especially one using an RTOS) some conditions are met only at start-up or upon certain requests, so you may want to figure out, when the breakpoint actually is hit, which conditions were met and what the new state is.

Tidying up history in RStudio to document an analysis

I am doing some analysis in RStudio, and at the moment - as I am refreshing my knowledge of R after a few decades away from S - this involves writing lots of one-liner statements that operate on test datasets, inspecting and testing the output, and finally scaling things up once I've checked that all the little bits work.
So my history is full of syntax errors and the like. But I am making progress every time I work, and each session produces statements that worked and that I want to keep, to document the parts of the session worth saving. Is there an established way of extracting these from my history for re-use in RStudio? Should I just scroll through after each session and copy and paste them into a text file, or is there something cleverer I can do while staying within RStudio?
The easiest way to see your history is to hit Ctrl-4, which brings up the History window. You can copy entries from there to the source pane (or wherever) and edit them. However, for what you are doing it is probably better to edit directly in a source window.
The setup I use is to have a script window open and use Ctrl-Enter to run the current line.
To make this easier, go to Tools > Options > Code Editing and make sure "focus console after executing from source" is unchecked; your cursor will then stay in the script after the line is executed.
You can now type your lines and edit them until they do what you want, then move on to the next once each works. By the time you reach the end, you have already built up your script. And since your "history" is right there in front of you, it is much easier to skip back to older lines and rerun or modify them. If you want to run a block of code, simply highlight it and hit Ctrl-Enter.
In the History panel in RStudio (top-right panel), you can click "send to source" and it will copy the selected line over to whatever .R file you have open in the top-left panel.

Subversion: "svn update" loses CSS data

Recently, I've noticed strange behavior by Subversion. Occasionally, and seemingly randomly, the "svn up" command will wreak havoc on my CSS files. 99% of the time it works fine, but when it goes bad, it's pretty damn terrible.
Instead of noting a conflict as it should, Subversion appears to be trashing all incoming conflict lines and reporting a successful merge. This results in massively inconvenient manual merges because the incoming changes effectively disappear unless they're manually placed back into the file.
I would have believed this was a case of user error, but I just watched it happen. We have two designers who frequently work on the same CSS files, but both are familiar and proficient with conflict resolution.
As near as I can figure, this happens when both designers have a large number of changes to check in and one beats the other to the punch. Is it possible that this is somehow confusing SVN's merging algorithm?
Any experience or helpful anecdotes dealing with this type of behavior from SVN are welcome.
If you can find a diff/merge program that's better at detecting the minimal changes in files with this structure, use the --diff3-cmd option of svn update to invoke it.
It may be tedious, but you can check the changes in the CSS file by using
svn diff -r 100:101 filename/url
for example, and stepping back from your HEAD revision. This should show what changes were made and at what revision (svn log will also tell you by whom). It sounds like a merging issue I've had before, but unfortunately I found myself resolving it by looking at previous revisions and merging them manually too.
