TFS 2017 TestExecution.zip inaccessible to test agent machine - automated-tests

I'm very confused by the behavior of the TFS 2017u2 Deploy Test Agent task when operating in offline mode. The task appears to fail when looking on the test agent machine for a path local to the deployment agent. Either that or it fails trying to connect via admin share.
Using the Deploy Test Agent task in Release Management, the Test Agent installs successfully from a separate UNC share. However, the task then detects that the DTAAgentExecutionService and DTAExecutionHost are not present/running. It looks like the RemoteDeployerService then reaches back from the Test Agent workstation to the Deployment Agent machine to grab a copy of the TestExecution.zip archive. This elicits warnings in the log: "Failed to connect to the path \\******** with the user....", "System error 53 has occurred," and "The network path was not found." Finally, the effort fails with:
[error]Error occured on '********:5986'. Details : 'Test agent source path 'D:\agents\DA_1\_work\_tasks\DeployVisualStudioTestAgent_52a38a6a-1517-41d7-96cc-73ee0c60d2b6\2.1.8\TestExecution.zip' is not accessible to the test machine. Please check if the file exists and that test machine has access to that machine'.
That path clearly references the deployment agent machine, however, so I don't know why the test agent would be looking for it as a local path (d:\ instead of \\). To be sure, I manually logged into the test agent workstation with the same account configured for the RM task, and I can access that folder and archive remotely via \\d$\ share without a problem.
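For illustration, the manual check was roughly the following, run from the test agent workstation (DEPLOY-AGENT and MYDOMAIN\svc_rm are placeholders for the masked host and the RM task account, not the real names):
rem Map the deployment agent's admin share with the account configured for the task
net use \\DEPLOY-AGENT\d$ /user:MYDOMAIN\svc_rm *
rem Confirm the archive is visible over the share
dir "\\DEPLOY-AGENT\d$\agents\DA_1\_work\_tasks\DeployVisualStudioTestAgent_52a38a6a-1517-41d7-96cc-73ee0c60d2b6\2.1.8\TestExecution.zip"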
Is the test agent trying to reach back to the deployment agent?
I expected the deployment agent machine to initiate the installation and monitor it remotely, pushing content to the test agent workstation as required. In our production environment, firewalls allow traffic originating from the deployment agent to reach the test agent, but not the other way around.
If not, what is it doing? Where does it expect to find TestExecution.zip? I tried putting a copy in the same UNC folder as the TestAgent.zip installer but that did nothing.

Related

Kaa - Issue with Raspberry Pi example app

I am attempting to follow this tutorial: http://docs.kaaproject.org/display/KAA/Raspberry+Pi
When I run the final commands:
tar -zxf notification_demo.tar.gz
cd CppNotificationDemo/
./build.sh deploy
After it finishes building it displays:
Press Enter to subscribe to optional topics
I press Enter, then it displays:
Press Enter to exit
I do not press Enter, and after a couple of minutes this error message shows up:
[client_1][2017-Jan-19 11:29:22.287762][0x755ff450][warning][HttpClient.cpp:41]:Transport error occurred: Connection timed out
[client_1][2017-Jan-19 11:29:22.313916][0x755ff450][warning][AbstractHttpChannel.cpp:103]: Channel [default_bootstrap_channel] failed to connect 130.113.109.160:9889: Connection timed out
[client_1][2017-Jan-19 11:29:22.353513][0x755ff450][warning][AbstractHttpChannel.cpp:124]: Channel [default_bootstrap_channel] detected 'CURRENT_BOOTSTRAP_SERVER_NA' failover for TransportConnectionInfo{ server: 'BOOTSTRAP', protocol: 'TransportProtocolId{ id: 0xfb9a3cf0, version: 1 }', accessPointId: -1835393002, isFailed: 'false' }
[client_1][2017-Jan-19 11:29:22.354396][0x755ff450][warning][KaaChannelManager.cpp:157]: No Bootstrap services are accessible for TransportProtocolId{ id: 0xfb9a3cf0, version: 1 }. Processing failover...
[client_1][2017-Jan-19 11:29:22.355018][0x755ff450][warning][KaaChannelManager.cpp:148]: Attempt to reconnect to first Bootstrap service will be made in 5 seconds
What does this error message mean, and how do I solve this?
That message usually means the application cannot connect to the Kaa Sandbox. There are several possible causes, and you should work through them until it starts working.
Ensure you run the application on the same host PC that the Kaa Sandbox is running on. In that case, with the default Sandbox configuration, the application should be able to access all the necessary Kaa services on the Sandbox with no additional configuration.
If you need to run the application remotely (i.e. from another host PC, with the Kaa Sandbox virtual machine accessible over the local network), you need to change the Kaa host setting on the Administration UI's Management page to the real IP address of the host PC the Kaa Sandbox is running on. Then you will need to re-generate the Kaa SDK, download it, and use it during the application build.
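As a quick sanity check before rebuilding, you can verify from the Raspberry Pi that the Sandbox host and the bootstrap port from the log above are reachable (<sandbox-ip> is a placeholder for the Sandbox host's real IP address):
# Basic reachability, then the bootstrap HTTP port seen in the log
ping -c 3 <sandbox-ip>
nc -vz <sandbox-ip> 9889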
If neither of these works for you, the network and/or some other configuration is incorrect and needs investigation. Please describe your network topology, all the PC hosts involved, the steps you took after downloading the Kaa Sandbox, and how you built the application. We will analyse this data and try to identify the root cause of the issue.

AWS CodeDeploy vs Windows 2016 in ASG

I use AWS CodeDeploy to deploy builds from GitHub to EC2 instances in an Auto Scaling group.
It's working fine for Windows 2012 R2 with all Deployment configurations.
But for Windows 2016, it fails completely on a "OneAtATime" deployment.
During an "AllAtOnce" deployment, only one or two instances deploy successfully; all the others fail.
In the agent's log file, this suspicious message appears:
ERROR [codedeploy-agent(1104)]: CodeDeploy Instance Agent Service: CodeDeploy Instance Agent Service: error during start or run: Errno::ETIMEDOUT
- A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. - connect(2)
All policies, roles, software, builds, and other settings are the same; I even tested this on a brand new AWS account.
Has anybody faced such behaviour?
I ran into the same problem. During my investigation, I found out that the server's route table had a wrong route for the 169.254.169.254 network (it pointed at the gateway from the network where my template was captured), so the instance couldn't read its metadata.
From the above error it looks like the agent isn't able to talk to the CodeDeploy endpoint after the instance starts up. Please check whether the routing tables and other proxy-related settings are set up correctly. Also, if you have not done so already, you can turn on debug logging by setting :verbose to true in the agent config and restarting the agent. This will help debug the issue.
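For example, a minimal sketch of those checks on a Windows 2016 instance, assuming the agent's usual install locations and service name (adjust if yours differ), from an elevated PowerShell prompt:
# Verify the instance can reach the metadata service (a wrong route breaks this)
route print 169.254.169.254
Invoke-RestMethod http://169.254.169.254/latest/meta-data/instance-id
# Enable verbose agent logging: edit C:\ProgramData\Amazon\CodeDeploy\conf.yml, set ":verbose: true", then restart
Restart-Service codedeployagent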

Why does sbt try to publish packages anonymously

We are running an Artifactory server that we use for our artifacts. When I tail the access log, I notice that every time an sbt project publishes, it first tries to do so anonymously (which is disallowed in our configuration).
Two lines with denied deploys always show up in the access log. The curious thing is that the IP addresses are also different: it turns out that the IP address that succeeds belongs to the slave (which I expect to publish), while the other IP address belongs to the controller of the build server (Jenkins in our case).
I am not very familiar with sbt, but in our sbt configuration there is a part that appends the correct credentials from a file:
credentials += Credentials(file("path") / "to" / ".credentials")
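For context, that file uses the standard sbt/Ivy credentials format (the values below are placeholders), and the credentials are only applied when the realm and host match what the repository sends back in its authentication challenge:
realm=Artifactory Realm
host=artifactory.example.com
user=deployer
password=********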
Is it possible that the credentials sequence contains an anonymous entry and that it will try that one first?
And does anyone know why it tries to publish from the build server's IP, where the job is not running?
By the way, the publish does work; it just annoys me that it first tries to publish anonymously.

How do I publish an ASP.NET MVC3 application to Azure using "Publish" -> "Web Deploy"? (getting error: Web Deployment task failed)

I'm getting this error when trying to deploy to Azure using "Publish" from my MVC project. I checked the Web Management Service and the Web Deployment Agent Service, and both are running. The credentials I'm using are valid, since I can access the server instance through Remote Desktop.
I've also checked the values of the properties VS filled in for me on the "Publish Web" dialog, and they are correct:
Service Url: https://*.cloudapp.net:8172/MsDeploy.axd (where * is my hosted service ID)
Site/Application: NLSubscriber.Web_IN_0_Web
Error 66 Web deployment task failed.(Could not connect to the destination computer (".cloudapp.net"). On the destination computer, make sure that Web Deploy is installed and that the required process ("The Web Management Service") is started.)
This error indicates that you cannot connect to the server. Make sure the service URL is correct, firewall and network settings on this computer and on the server computer are configured properly, and the appropriate services have been started on the server.
Error details:
Could not connect to the destination computer (".cloudapp.net"). On the destination computer, make sure that Web Deploy is installed and that the required process ("The Web Management Service") is started.
Unable to connect to the remote server
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond xxx.xxx.xxx.xxx:8172
0 0 NLSubscriber.Web
I've checked that the service URL is reachable (and not blocked by the firewall) by accessing it directly through my browser, typing the same address VS is showing; after warning me about the untrusted certificate, it prompted me for a user/password, which I provided, and I accessed it with no problem at all.
I've opened Remote Desktop and checked the site name in IIS Manager, which is the same as the one pre-populated by VS: NLSubscriber.Web_IN_0_Web.
What am I missing?
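For what it's worth, one way to exercise the same endpoint outside Visual Studio is to call msdeploy.exe directly with the dump verb; the host name, user name, and password below are placeholders:
msdeploy.exe -verb:dump -allowUntrusted -source:iisApp="NLSubscriber.Web_IN_0_Web",computerName="https://myservice.cloudapp.net:8172/MsDeploy.axd?site=NLSubscriber.Web_IN_0_Web",userName="myuser",password="mypassword",authType="Basic"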
It seems Arwind has solved this issue on http://social.msdn.microsoft.com/Forums/en-US/windowsazuredevelopment/thread/dfe98f28-b211-4124-a505-0c664f48b3e5/
Please check the link and see whether it helps.
Best Regards,
Ming Xu.

GlassFish 3.1.2 - validate-dcom fails with "The remote file, C: doesn't exist" (Centralized Administration with Windows DCOM)

OS: Windows Server 2008 R2 x 2 (firewall disabled on both machines)
I wish to take advantage of the GlassFish 3.1.2 Windows DCOM feature to set up communication between the GlassFish DAS and a remote node. I've successfully followed Byron Nevins's instructions on using the GlassFish 3.1.2 DCOM Configuration Utility.
However, I'm having an issue validating DCOM when following the instructions in the GlassFish 3.1.2 Guide - 2 Enabling Centralized Administration of GlassFish Server Instances.
When I run the command validate-dcom --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80, I get the following output:
asadmin> validate-dcom --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80
remote failure:
Successfully verified that the host, 192.168.0.80, is not the local machine as required.
Successfully resolved host name to: /192.168.0.80
Successfully connected to DCOM Port at port 135 on host 192.168.0.80.
Successfully connected to NetBIOS Session Service at port 139 on host 192.168.0.80.
Successfully connected to Windows Shares at port 445 on host 192.168.0.80.
The remote file, C: doesn't exist on 192.168.0.80 : Logon failure: unknown user name or bad password.
The password file, password.txt, contains a single entry:
AS_ADMIN_WINDOWSPASSWORD=my-windows-password
I have double-checked that I can successfully log in with my Windows password on the remote machine 192.168.0.80. I've also tried this test with two Windows XP Professional machines and got the same error.
I also performed this operation by creating a New Node in the Admin Console and got the same error.
I cannot figure out what is going wrong or what I may be missing.
Thanks in advance
I had similar issues while setting up a new production environment at work last Friday, and could not find any useful information online, except other people encountering the same issue, some with comments as fresh as the day I was looking it up.
So after a rather excessive amount of painful, in-depth debugging, I was able to figure out a few things:
You must explicitly specify the local Windows user you created for the purpose of running GlassFish in both the add-node dialog and the validate-dcom subcommand (option -w); otherwise it will default to either 'admin' or the user the DAS is running as.
There is a bug in validate-dcom that causes it to ignore whatever you specify as the test directory. No matter what you do, it will always use C:\ and result in "access denied".
The documentation omits another registry key that must be given access in order for WMI to work.
Regarding the first issue, you will most likely encounter it if your nodes are not part of a domain or you are using a local account. Windows NT6+ has a new default security policy that prevents local users from elevating privileges over the network, which necessarily causes that test to fail, since writing to the root of a system drive is not something one can do without elevation.
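In other words, pass the dedicated Windows user explicitly, for example (gfservice is a placeholder for the local service account you created):
asadmin> validate-dcom --passwordfile C:/Sun/AppServer/password.txt -w gfservice -v 192.168.0.80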
I previously blogged about it so that someone could stumble upon it if needed:
http://www.raptorized.com/2008/08/19/access-administrative-shares-on-server-2008vista/
The gist of it is that you have to navigate to the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
and create a new DWORD named LocalAccountTokenFilterPolicy with a value of 1.
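If you prefer the command line, the equivalent from an elevated prompt should be something like:
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f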
There is no need to reboot; the first, previously broken test should now pass. However, you will then see an error about being unable to connect to WMI, and it will fail again.
To remedy this, you must also take ownership and grant your local service account user full control over the following registry key, in addition to the other ones described in the HA Administration Guide:
HKEY_CLASSES_ROOT\CLSID\{76A64158-CB41-11D1-8B02-00600806D9B6}
Afterwards, validate-dcom should report success, and you will be able to add the machine as a node and create instances on it.
I hope this helps, because the seeming lack of activity from Oracle on that issue was infuriating.
I am also less than pleased by the hackish, ugly, insecure nature of the DCOM support in Glassfish 3 :(
