Remote access to DOS command line from Unix

I'm looking for a way to develop some Unix scripts that will connect to a DOS box (Windows Server 2012) and interactively execute DOS commands.
I'm comfortable with the Unix side (I'll almost certainly use Expect), but I'm "Windows illiterate" and have been unable to find anything about connecting to the Windows command line in this fashion. Is this even possible to do?
(FWIW, this is to enable us to control Tableau Server using its 'tabcmd' DOS command suite from our existing Linux environment.)
UPDATE 1:
I think another way of asking the question is: does Windows provide anything that is the equivalent of the Unix "remote shell", accessible from Unix?
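To make the goal concrete, here's the kind of Expect-driven session I have in mind (a hypothetical sketch; it assumes some remote shell service, ssh here, exists on the Windows side, which is exactly the part I'm asking about; the host, credentials, and commands are made up):

#!/bin/sh
# Hypothetical sketch: assumes an SSH service on the Windows box exposes
# cmd.exe; host, user, password, and commands are placeholders.
expect <<'EOF'
spawn ssh tableau@winserver2012
expect "password:"
send "secret\r"
expect ">"
send "tabcmd login -s https://localhost -u admin -p secret\r"
expect ">"
send "exit\r"
EOF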

There are no built-in tools to do this, although PsExec is a utility that can almost do what you want. It is part of the PsTools suite, which is not built in but is hosted on TechNet. Some things to keep in mind:
PsExec works by actually remotely copying a file over to the Windows System32 folder (remote copying is one thing that is built in ;)
Windows uses Kerberos for authentication. This depends on the computer you are running the command from being in the same Active Directory domain as the computer you want to control, with access set up from that side. Linux can use Kerberos through third-party AD integration tools (like Quest Authentication Services or Centrify, both commercial products), but there are no built-in tools that do it.
PsExec traffic is not encrypted, meaning that if you send commands containing sensitive data, they can be seen (though not the authentication part of it).
PsExec is obviously still a Windows utility. I have been able to get it to work using Wine, but only for a local account and after some tweaking with matching hostnames and such (see the sketch after this list). It's possible that if you have authenticated using QAS or Centrify your wine command will somehow pick it up, but I haven't tested it; I no longer work somewhere that uses AD.
Maybe the biggest problem is the difference in philosophy between the two communities. Windows doesn't use command-line execution very often for remote administration. There is more focus on using your local utilities on a remote system (e.g., you can load a remote registry hive from RegEdit or browse the file system of a remote system using your local Windows Explorer).
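For what it's worth, here is a rough sketch of the Wine invocation that corresponds to the local-account case above; the PsExec path, hostname, and credentials are all placeholders:

# Sketch: run PsExec under Wine against a local (non-AD) Windows account.
# Path, host, and credentials are placeholders to substitute.
wine ~/pstools/psexec.exe '\\winserver2012' -u Administrator -p secret \
    cmd /c "tabcmd login -s https://localhost -u admin -p secret"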
Overall, I think Keith's solution is actually the best, and the most straightforward.

Related

Using SSH in Cygwin - Invoking local programs

I'm working on an assignment where I need to connect remotely to my company's UNIX boxes and parse out a particular set of log entries. I've figured out a method for doing so with grep and the -C flag, but the version of UNIX installed on these machines doesn't support that functionality. One alternative I've considered is doing the work on my local machine through Cygwin, using the local version of grep to handle this task. However, these logs are especially large, upwards of 50 megabytes, and the connections to the boxes are very slow, so it would take several hours to complete the downloads.
My main question: is it possible to connect through SSH to a remote server but invoke the locally installed versions of certain programs? For example, if I SSH into the server, can I make use of the local version of grep instead of the remote system's version?
I've attempted to do something similar using awk and sed, but I haven't had much success. At this point, aside from a long period of downloading, I'm not sure what other options I have. Any advice? Thanks in advance. :)
Even if you could use a remote file with a local application, you'd still be downloading the entirety of the log files - ssh allows output/input to pass between boxes, but you're not actually running your local grep on the remote machine; the remote machine would be sending its file to your local grep.
One alternative is to gzip the logfiles before sending them through ssh, e.g.
ssh user@remotebox 'gzip -9 -c logfile' | gzcat | grep whatever
You'd still be sending the entirety of the log files, but log files tend to compress very well, so you'd only be sending a small fraction of the original data (e.g., a couple of megabytes vs. 50 uncompressed).
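And since the original goal was context lines, the local end of that pipe can use -C directly; the pattern and path below are made up:

ssh user@remotebox 'gzip -9 -c /var/log/app.log' | gzcat | grep -C 3 'ERROR'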
Alternatively, you could try compiling GNU grep from source on the remote machines, assuming there's an appropriate compiler toolchain on those machines.
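If you go that route, the usual autotools steps are all it takes; a sketch assuming a writable home directory and a working compiler on the remote box (the version number is only an example, and you may need to scp the tarball over if the box has no direct internet access):

# Sketch: build a private copy of GNU grep on the remote machine.
tar xzf grep-3.11.tar.gz
cd grep-3.11
./configure --prefix=$HOME/local
make && make install
~/local/bin/grep -C 3 'ERROR' /var/log/app.log    # -C now works on the remote box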

How to run automated GUI tests on a remote headless ESXi Virtual Machine?

I'm trying to set up automated GUI tests in ESXi virtual machines using TestComplete. The problem, as I understand it, is that when no remote desktop connection is made to the ESXi virtual machine, it is impossible for TestComplete to perform screen captures and therefore to automate the GUI testing. As far as I understand it, this is because Windows does not render any user interface when nobody is viewing it.
I'm sure others have experienced this problem. How did you solve it? Are you using a third-party computer which automatically launches remote desktop connections prior to running the tests?
Would it be possible to launch a remote desktop from one headless virtual machine to another to fake somebody viewing?
Any other, smarter solutions I haven't thought about?
You should be able to log in to Windows on the VM's console using the vSphere client, then close vSphere, and Windows will still believe the user is viewing the console. Simple as that. :)
So there shouldn't be a need to involve remote desktop in the mix.
As long as your tests then run as that logged-in Windows user, you should be fine.
This technique has always worked like a charm for me with certain Watir, Selenium, and MS UI Automation tests that depend on having an interactive desktop.
If you need to reboot the VM automatically before/during the test, instead of logging in manually in the vSphere client, you can make Windows log in as an arbitrary user automatically - check the "control userpasswords2" command, or you can use the Sysinternals app "Autologon":
http://technet.microsoft.com/en-us/sysinternals/bb963905
The only catch with this technique is that you need to be able to launch your tests while not viewing the console on the VM, but it sounds like you've already taken care of that?
If you need a solution for launching your tests remotely, I highly recommend using Jenkins or Hudson to kick off tests/collect results from the VM. Jenkins has changed my life in this regard.
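For example, once a Jenkins job wraps the test run on the VM, kicking it off remotely is a single authenticated HTTP POST (the hostname, job name, and credentials here are made up):

# Trigger the hypothetical "gui-tests" job via Jenkins' remote build API.
curl -X POST --user bob:apitoken http://jenkins.example.com/job/gui-tests/build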
You may consider using the Network Suites functionality of TestComplete:
http://smartbear.com/support/viewarticle/16849/
It can open Remote Desktop connections on its own, control tests on remote PCs, and pull the logs back to the "master" project. This feature is designed to be used for distributed tests, and looks like it's just what you need.
As for opening RDP to a headless VM, it should not be a problem - Windows handles that on its own. You just open RDP, and it works even if there is no monitor attached to the remote PC/VM.
I hope this helps,
Alex
You can always use VNC, with the "Do nothing" option checked for when the viewer disconnects. This way you'll trick Windows into generating the image.

OpenCL development platform?

I am developing OpenCL code on a Linux cluster through SSH - are there any tools that would make this process easier, i.e. something like NVIDIA Parallel Nsight for OpenCL?
No, there is no such tool, though you might try developing your code on an ordinary computer and posting production versions to the cluster.
If the computer where you perform development is also running Linux, you can easily mount a remote folder as local. In a Gnome environment, open Nautilus (the file manager), click File => Connect to Server, choose SSH, fill in the required parameters, and you have a remote folder as local.
You can then use any IDE you want to develop the code, and maybe perform simple runs, tests, and debugging if the OpenCL tools (compiler, debugger) you're using remotely are also installed locally. However, to compile and properly run the code on the cluster, you need to use the ssh client on the command line.
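If you prefer the command line over the Gnome dialog, sshfs achieves the same mount (a sketch assuming sshfs is installed locally; host and paths are placeholders):

# Sketch: mount the cluster's project directory locally over SSH (FUSE).
mkdir -p ~/cluster-src
sshfs user@cluster.example.com:/home/user/opencl-project ~/cluster-src
# ...edit with any local IDE...
fusermount -u ~/cluster-src    # unmount when done (use umount on BSD/macOS)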

Why don't QLocalSocket/Server connections work when one process was invoked by an NSIS installer?

I have an NSIS installer that installs my Qt application. At the end of the install process, the installer gives the user the option to launch the application immediately.
My application uses QLocalSocket/QLocalServer to talk to other local instances of the application. (They talk to each other basically just to ensure that there's only one instance of the app running at a time.) However, on Vista, if one of the instances was started up by the installer, then other instances cannot talk to that instance unless they were also started by the installer (or uninstaller, interestingly).
The NSIS installer launches the app with the Exec command. The client tries to connect to the server through QLocalSocket::connectToServer, which fails with the error "QLocalSocket::connectToServer: Unknown error 5".
Can anyone explain this? What's the best way to work around it?
If 5 is a Windows error code, it means access denied. Is there a way for you to change the security on this server (you would need access to the native pipe handle)?
The finish page run option has more issues than just this: the new process gets the wrong HKCU and user profile, etc.
I would recommend just disabling the run checkbox on the finish page. (This issue goes all the way back to Win2000, when RunAs was added.)
If you really really want this run checkbox, you can use the UAC plugin, it will allow you to start a child process as the "correct" user.
Finally figured this out. The installer was running as admin (the install script said "RequestExecutionLevel admin"), and apparently it launched my app with those elevated permissions, which meant that other instances of my app running with user-level permissions couldn't connect to it. QLocalSocket/Server uses named pipes on Windows, so I figure this is a Windows security feature. I'm planning to work around it by using the UAC NSIS plugin, which I believe lets you run a process with user-level permissions.

Running Visual Studio in Parallels for Mac - problem with debugging sites sitting on an OS X drive

I've installed Parallels Desktop on my MacBook to be able to run Visual Studio 2008 in an XP installation. Everything works great, except that I decided to put my websites in my Sites folder on the OS X file system (which by default happens automatically, because the My Documents folder is mapped to the Mac's Documents folder; I'd rather put my code there so that both OSes can easily access it).
When trying to build or debug I get this error:
Failed to start monitoring changes to 'Z:\xxx...'
How do I get this to work under Parallels, from the shared drive?
Parallels uses network drives to simulate folders on OS X, and Windows can't monitor changes to network drives, so if you do this directly, it'll be broken.
If you want to keep them in sync though, use Live Mesh (http://www.mesh.com) and install it on both the host and the guest. A little roundabout, but it'll ensure both copies are maintained (and Live Mesh is handy for other things too).
I recently flipped over to putting my source code onto my Mac volume so I could use Time Machine to back it up, and immediately got this same problem with my ASP.NET app. Other, procedural applications built just fine, by the way.
I tried all sorts of things, including using Samba on the Mac side to share the directory, which led to the "too many BIOS commands" error described elsewhere. Unfortunately for me, the Registry hacks to fix that problem never worked, for some reason.
I finally found another solution that avoids Samba and just uses the regular Parallels Shared Folders. It too is a Registry hack, but this one simply turns off file change monitoring for ASP.NET. It is a bit heavy-handed, but gets my builds to work again.
The reference for this change is here:
http://support.microsoft.com/kb/911272
The downside to this approach, I am finding, is that you need to be more deliberate about recompiling, or restarting the web server, as changes during development don't just magically appear anymore. I am still deciding whether that is a useful tradeoff.
UPDATE: After several days of this, development was just too difficult and, sadly, what I reverted to was keeping my source inside the Parallels virtual disk. To enable Time Machine backups and Spotlight searches, I used a lightweight MS utility called SyncToy to push stuff out of Parallels and out to my Mac drive several times a day. Despite the high hack factor, it is working well.
I know this isn't strictly a solution, but VMware Fusion is superior when it comes to shared drive space on a virtual machine. It's what I currently use, and it hasn't let me down thus far...
People always give me odd looks when they see Visual Studio on my Mac :P
Try moving the project onto the VM's C drive. It's not an ideal situation, but you can access the VM's C drive from OS X.
I have a similar problem with a PHP site that uses an MS Access database (it's a client's system). I have aliases that point to the PHP site on the VM so that I can still do all of my coding in OS X. To do this, I created a network share on the VM and then connected to it from OS X. Once connected, make the aliases. If the network drive is not open and you open a file in OS X, it will try to reconnect. It means the VM needs to be running to get to the files, but this isn't normally a problem, since the VM is hosting the site anyway.
.NET has funny issues trying to debug the objects on a network drive.
Make sure that you have full trust on your local network between your Mac and the XP install.
Check out: http://msdn.microsoft.com/en-us/library/aa302361.aspx
If that research doesn't pan out, I'm afraid you will have to look into the option of keeping it on the VM disk and moving it when you need it.
I see a similar problem on my machine connected to the Windows domain. My Documents is mapped to a network share and I can't debug|run|etc. I eventually had to move to my local disk for debugging.
I definitely recommend Live Mesh as a way to keep directories in sync. Just keep the VM's directory in sync with the Mac's directory.
Or use SVN to hold copies in both machines and do commit/update as appropriate. That way you get versioning, history and if your project grows bigger, you can share with other devs.
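The day-to-day loop is just the standard client commands, e.g. (the repository URL is a placeholder):

# One-time setup on both the Mac and the VM:
svn checkout http://svnserver.example.com/repo/mysite
# On the Mac, after editing:
svn commit -m "describe the change"
# Inside the XP VM, before building/debugging:
svn update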
I know Dropbox also has history and sharing, but not check-in/check-out, conflicts, and all the other advantages of real source control.
Oh, if you have money, you can also go for TFS. I would, but it is just too expensive :)
