How to run automated GUI tests on a remote headless ESXi Virtual Machine?

I'm trying to set up automated GUI tests in ESXi virtual machines using TestComplete. The problem, as I understand it, is that when no remote desktop connection is made to the ESXi virtual machine, it is impossible for TestComplete to perform screen captures and therefore automate the GUI testing. As far as I understand, this is because Windows does not render any user interface when nobody is viewing it.
I'm sure others have experienced this problem. How did you solve it? Are you using a third-party computer which automatically launches remote desktop connections prior to running the tests?
Would it be possible to launch a remote desktop connection from one headless virtual machine to another, to fake somebody viewing?
Any other, smarter solutions I haven't thought about?

You should be able to log in to Windows on the VM's console using the vSphere client, then close vSphere, and Windows will still believe the user is viewing the console. Simple as that. :)
So there shouldn't be a need to involve remote desktop in the mix.
As long as your tests then run as that logged-in Windows user, you should be fine.
This technique has always worked like a charm for me with certain Watir, Selenium, and MS UI Automation tests that depend on having an interactive desktop.
If you need to reboot the VM automatically before/during the test, instead of logging in manually in the vSphere client, you can make Windows log in as an arbitrary user automatically - check the "control userpasswords2" command, or use the Sysinternals "Autologon" tool:
http://technet.microsoft.com/en-us/sysinternals/bb963905
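If you'd rather script the autologon instead of using those GUI tools, the classic Winlogon registry values do the same thing. A hedged sketch (the user name and password are placeholders; note that this stores the password in plain text, which is why the Sysinternals tool, which encrypts it, is generally preferable):
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d TestUser
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d MyTestPassword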
Only catch with this technique is that you need to be able to launch your tests while not viewing the console on the VM, but it sounds like you've already taken care of that?
If you need a solution for launching your tests remotely, I highly recommend using Jenkins or Hudson to kick off tests/collect results from the VM. Jenkins has changed my life in this regard.

You may consider using the Network Suites functionality of TestComplete:
http://smartbear.com/support/viewarticle/16849/
It can open Remote Desktop connections on its own, control tests on remote PCs, and pull the logs back to the "master" project. This feature is designed to be used for distributed tests, and looks like it's just what you need.
As for opening RDP to a headless VM, that should not be a problem - it's up to Windows to "think" about this. You just open the RDP connection, and it works even if there is no monitor attached to the remote PC/VM.
I hope this helps,
Alex

You can always use VNC, with the "Do nothing" option selected for when the viewer disconnects. This way you'll trick Windows into keeping the desktop image generated.

Related

Every time I use Expo client, my HTTP network fails to work

TCP connections work fine, as I am able to converse with someone over Zoom and TeamViewer. However, whenever I attempt to access another webpage, I get a network error. Google seems to work fine for some reason, but any webpage I go to that is listed by Google fails to connect. The only way I can open up my HTTP connections is by ending a task called "init" inside of Task Manager. This shuts down my VS Code as well as the Ubuntu terminals I have running. If someone knows the solution, please do tell. It's really annoying having to close out my VS Code, terminals, and local servers just to look up information and debug.
I found a fix for this issue.
I was running Windows Subsystem for Linux, and both my Windows build and WSL were outdated. When I updated Windows and upgraded to WSL2, the issue was resolved and I don't seem to get network errors anymore.
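For anyone hitting the same thing, the upgrade amounts to updating Windows through Windows Update and then converting the distro to WSL2. Roughly (the distro name "Ubuntu" here is an assumption based on my setup - check yours with the list command first):
wsl -l -v
wsl --set-default-version 2
wsl --set-version Ubuntu 2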

Remote access to DOS command line from Unix

I'm looking for a way to develop some Unix scripts that will connect to a DOS box (Windows Server 2012) and interactively execute DOS commands.
I'm comfortable with the Unix side (I'll almost certainly use Expect), but I'm "Windows illiterate" and have been unable to find anything about connecting to the Windows command line in this fashion. Is this even possible to do?
(FWIW, this is to enable us to control Tableau Server using its 'tabcmd' DOS command suite from our existing Linux environment.)
UPDATE 1:
I think another way of asking the question is: does Windows provide anything that is the equivalent of the Unix "remote shell", accessible from Unix?
There are no built-in tools to do this, although PsExec is a utility that can almost do what you want. The PsTools are not built in, but are hosted on TechNet. Some things to keep in mind:
PsExec works by actually copying a file over to the remote machine's Windows System32 folder (copying is one thing that is built in ;)
Windows uses Kerberos for authentication. This depends on the computer you are running the command from being in the same Active Directory domain as the computer you want to control, with access set up from that side. Linux can use Kerberos through third-party AD integration tools (such as Quest Authentication Services, a commercial product, or Centrify), but there are no built-in tools that do it.
PsExec traffic is not encrypted, meaning that if you send commands containing sensitive data, they can be seen (though not the authentication part of it).
PsExec is obviously still a Windows utility. I have been able to get it to work using Wine, but only for a local account and after some tweaking with matching hostnames and such. It's possible that if you have authenticated using QAS or Centrify, your Wine command will somehow pick it up, but I haven't tested it; I don't work where we use AD anymore.
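For concreteness, a typical PsExec invocation looks something like this (the host name, account, and tabcmd arguments are placeholders, not tested against Tableau):
psexec \\WINSRV2012 -u DOMAIN\svcaccount -p Secret123 cmd /c "tabcmd.exe <your-tabcmd-arguments>"
The command runs on the remote machine, and PsExec relays its console output and exit code back to the caller, which is what makes it scriptable from an Expect-style wrapper.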
Maybe the biggest problem is the difference of philosophy between the two communities. Windows doesn't use command line execution very often for remote administration. There is more focus on using your local utilities on a remote system (i.e., you can load a remote registry hive from RegEdit or browse the file system of a remote system using your local Windows Explorer program).
Overall, I think Keith's solution is actually the best, and the most straightforward.

Selenium performance with .NET and Jenkins - How to profile and improve it?

I have an ASP.NET website that I want to test with Selenium. I want to setup a Jenkins instance on a "staging" virtual machine, to run the tests automatically.
The problem is that the tests run very slowly - several times slower than on my development machine. A single, simple test can take more than 2-3 minutes.
I'd like to know if this is to be expected, if there are any obvious pitfalls for such a testing setup as mine, and if there's anything I can do to profile and improve the performance of the test suite.
Info:
Tests run on a 2.7 GHz / 2 GB RAM virtual machine with 64-bit Windows 7.
My dev machine is similar, but with a 32-bit Windows installation.
The following is done by Jenkins:
The website gets checked out from source control and configured with a custom web.config. The main differences are that it's compiled in release mode and it connects to a database on a different machine (also a virtual machine, on the same server).
IIS is monitoring the website's directory and automatically reloads changes.
The following command is run (directories sanitized): nunit-console Selenium-Project-Dir /labels
The tests are run on the Chrome webdriver.
The selenium project uses NUnit and WebRunner.
Driver instances are created once, before all tests, in a [SetUp] method inside a [SetUpFixture] class. They are disposed once, in the class's [TearDown] method.
A sample test looks like this:
[Test, Combinatorial]
public void AnExistingUserCanLogin(
    [ValueSource(typeof(Drivers), "Good")]
    IWebDriver driver)
{
    // This function clicks on some buttons and fills in some forms.
    LoginUser(driver);
    // Make sure the user is now logged in.
    Assert.IsTrue(driver.ElementIsPresent(By.ClassName("imgUserAvatar")));
    Assert.IsTrue(driver.ElementIsPresent(By.CssSelector("a.my-profile")));
    Assert.IsTrue(driver.ElementIsPresent(By.CssSelector("a.logout")));
}
(The "Drivers" class contains lazily-instances webdriver instances of FF, IE, Chrome. You can guess what the "Good" static property of the class instances)
The fact that it is a VM should not be a problem, and the RAM sounds adequate. What OS is being run, though? Some Windows versions show significant speed differences between the 32-bit and 64-bit editions (because of their demands on hardware). This is especially noticeable in Vista, but I have noticed a difference between Windows 7 and Server 2003 as well.
I have found that there is a huge problem if the machine hosting the 64-bit VM is a 32-bit machine. It often seems to work well for a while, and then, as it continues to run, the tests slow down. The other main thing is that there may be too many VMs running on the same host. Other programs will be affected by this too, which can help you troubleshoot if your VMs are on a company farm.
The other factor that has made a significant difference is whether you are running the tests against different browsers. IE8 and IE9 will take the same Selenium command and run it at different speeds. I don't know why this is; I just know that I have seen it. Make sure the staging machine has the current version of Chrome (or at least the same one as your dev machine).
How the staging machine is networked to the database might also make a difference. This seems unlikely, but if there is firewall crap between the two VMs, it can have a significant impact on timing.
The only other thing I can think of that might change the run time is other programs executing on the staging machine at the same time. Checking the CPU usage might be helpful. Personally, I have noticed that my VM uses a lot more of the CPU than my physical machine. If this is the case, the only solution I have found so far is to give the VM more processing power or to run only the tests and nothing else.
Good luck!

Why don't QLocalSocket/Server connections work when one process was invoked by an NSIS installer?

I have an NSIS installer that installs my Qt application. At the end of the install process, the installer gives the user the option to launch the application immediately.
My application uses QLocalSocket/QLocalServer to talk to other local instances of the application. (They talk to each other basically just to ensure that there's only one instance of the app running at a time.) However, on Vista, if one of the instances was started up by the installer, then other instances cannot talk to that instance unless they were also started by the installer (or uninstaller, interestingly).
The NSIS installer launches the app with the Exec command. The client tries to connect to the server through QLocalSocket::connectToServer, which fails with the error "QLocalSocket::connectToServer: Unknown error 5".
Can anyone explain this? What's the best way to work around it?
If 5 is a Windows error code, it would mean "access denied". Is there a way for you to change the security on this server? (You would need access to the native pipe handle.)
The finish page run option has more issues than just this one; the new process gets the wrong HKCU and user profile, etc.
I would recommend just disabling the run checkbox on the finish page. (This issue goes all the way back to Windows 2000, when RunAs was added.)
If you really really want this run checkbox, you can use the UAC plugin, it will allow you to start a child process as the "correct" user.
Finally figured this out. The installer was running as admin (the install script said "RequestExecutionLevel admin"), and apparently it launched my app with those elevated permissions, which meant that other instances of my app running with user-level permissions couldn't connect to it. QLocalSocket/Server uses named pipes on Windows, so I figure this is a Windows security feature. I'm planning to work around it by using the UAC NSIS plugin, which I believe lets you run a process with user-level permissions.
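For what it's worth, the underlying behaviour is easy to reproduce outside Qt. A minimal .NET Framework sketch (purely illustrative - the pipe name is made up, and this is not the asker's code) of a server that explicitly loosens the pipe's ACL so clients running as other, non-elevated users can still connect:
using System.IO.Pipes;
using System.Security.AccessControl;
using System.Security.Principal;

class PipeAclDemo
{
    static void Main()
    {
        // A pipe created by an elevated process gets a default ACL that
        // can reject connections from non-elevated clients. Granting
        // Everyone read/write access avoids the "access denied" (error 5).
        var security = new PipeSecurity();
        var everyone = new SecurityIdentifier(WellKnownSidType.WorldSid, null);
        security.AddAccessRule(new PipeAccessRule(
            everyone, PipeAccessRights.ReadWrite, AccessControlType.Allow));

        using (var server = new NamedPipeServerStream(
            "MyAppSingleInstance", PipeDirection.InOut, 1,
            PipeTransmissionMode.Byte, PipeOptions.None, 0, 0, security))
        {
            server.WaitForConnection();  // serve one client, then exit
        }
    }
}
Qt's QLocalServer doesn't expose this ACL directly, which is why grabbing the native pipe handle (as suggested above) or avoiding the elevation mismatch altogether tends to be the practical fix.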

Running Visual Studio in Parallels for mac - problem with debugging sites sitting in os x drive

I've installed Parallels Desktop on my MacBook to be able to run Visual Studio 2008 in an XP installation. Everything worked great until I decided to put my websites in my Sites folder in the OS X file system (which by default happens automatically, because the My Documents folder is mapped to the Mac's Documents folder; I'd rather put my code there anyway so that both OSes can easily access it).
When trying to build or debug I get this error:
Failed to start monitoring changes to 'Z:\xxx...'
How do I get it so that I can get it to work under Parallels, from the shared drive?
Parallels uses network drives to simulate folders on OS X, and Windows can't monitor changes to network drives, so if you do this directly, it'll be broken.
If you want to keep them in sync though, use Live Mesh (http://www.mesh.com) and install it on both the host and guest. A little roundabout, but it'll make it so both copies are maintained (and Live Mesh is handy for other things too)
I recently flipped over to putting my source code onto my Mac volume so I could use Time Machine to back it up, and immediately got this same problem with my ASP.NET app. Other, procedural applications built just fine, by the way.
I tried all sorts of things, including using Samba on the Mac side to share the directory, which led into the "too many BIOS commands" error described elsewhere. Unfortunately for me, the Registry hacks to fix that problem never worked for some reason.
I finally found another solution that avoids Samba and just uses the regular Parallels Shared Folders. It too is a Registry hack, but this one simply turns off file change monitoring for ASP.NET. It is a bit heavy-handed, but gets my builds to work again.
The reference for this change is here:
http://support.microsoft.com/kb/911272
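If I remember right, the relevant setting is the FCNMode DWORD under the ASP.NET key; a value of 1 turns ASP.NET's file change notifications off entirely. Double-check the article above for the exact key and values for your .NET version, and restart IIS afterwards:
reg add "HKLM\SOFTWARE\Microsoft\ASP.NET" /v FCNMode /t REG_DWORD /d 1
iisreset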
The downside to this approach, I am finding, is that you need to be more deliberate about recompiling, or restarting the web server, as changes during development don't just magically appear anymore. I am still deciding whether that is a useful tradeoff.
UPDATE: After several days of this, development was just too difficult and, sadly, what I reverted to was keeping my source inside the Parallels virtual disk. To enable Time Machine backups and Spotlight searches, I used a lightweight MS utility called SyncToy to push stuff out of Parallels and out to my Mac drive several times a day. Despite the high hack factor, it is working well.
I know this isn't strictly a solution, but VMware Fusion is superior when it comes to shared drive space on a virtual machine. It's what I currently use, and it hasn't let me down thus far...
People always give me odd looks when they see Visual Studio on my Mac :P
Try moving the project onto the VM's C: drive. It's not an ideal situation, but you can access the VM's C: drive from OS X.
I have a similar problem with a PHP site that uses an MS Access database (it's a client's system). I have aliases that point to the PHP site on the VM so that I can still do all of my coding in OS X. To do this, I created a network share on the VM and then connected to it from OS X. Once connected, make the aliases. If the network drive is not open and you open a file in OS X, it will try to reconnect. It means the VM needs to be running to get to the files, but this isn't normally a problem, since the VM is hosting the site anyway.
.NET has funny issues trying to debug the objects on a network drive.
Make sure that you have full trust configured on your local network between your Mac and the XP install.
Check out: http://msdn.microsoft.com/en-us/library/aa302361.aspx
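If you go down the full-trust route, the era-appropriate tool is caspol.exe. Something along these lines (from memory - verify the code group and URL syntax against the article above) grants full trust to assemblies loaded from the Z: share:
caspol -machine -addgroup 1.2 -url "file://Z:/*" FullTrust -name "Parallels share"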
If that research doesn't pan out, I'm afraid you will have to look into the option of keeping it on the VM disk and moving it when you need it.
I see a similar problem on my machine connected to the Windows domain. My Documents is mapped to a network share, and I can't debug|run|etc. I eventually had to move to my local disk for debugging.
I definitely recommend Live Mesh as a way to keep directories in sync. Just keep the VM's directory in sync with the Mac's directory.
Or use SVN to hold copies on both machines and commit/update as appropriate. That way you get versioning and history, and if your project grows bigger, you can share with other devs.
I know Dropbox also has history and sharing, but not check-in/check-out/conflicts and all the other advantages of real source control.
Oh, if you have money, you can also go for TFS. I would, but it is just too expensive :)
