Why does sbt try to publish packages anonymously?

We are running an Artifactory server that we use for our artifacts. When I tail the access log, I notice that every time an sbt project publishes, it first tries to do so anonymously (which is disallowed in our configuration).
The access log always shows two lines with denied deploys. The curious thing is that the IP addresses also differ: the address that succeeds is the IP address of the slave (which I expect to publish), while the other IP address belongs to the controller of the build server (Jenkins in our case).
I am not very familiar with sbt, but our sbt configuration contains a part that appends the correct credentials from a file:
credentials += Credentials(file("path") / "to" / ".credentials")
Is it possible that the credentials sequence contains an anonymous entry and that sbt tries that one first?
And does anyone know why it tries to publish from the build server's IP, where the job is not running?
BTW, the publish does work, but it annoys me that it tries to publish anonymously.
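For reference, the file sbt reads is an Ivy-style properties file along these lines (all values here are placeholders):
realm=Artifactory Realm
host=artifactory.example.com
user=deployer
password=secret
From what I've read, Ivy doesn't authenticate preemptively: the first request goes out anonymous and credentials are only sent after the server's 401 challenge, and the realm/host in this file must match that challenge exactly. That alone could account for one denied line per publish.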

Related

BizTalk file is not showing up and cannot be sent

I have set up a BizTalk environment and am trying to follow a tutorial on GitHub: wmmihaa/BizTalk-Server---Developing-Integration-Solutions.
Unfortunately, after configuring the server as the steps describe, the path the tutorial tells me to copy into the send port does not show up in File Explorer. So I created the path myself and browsed to it (same name as in the tutorial), but when I start the application, the text file does not show up in the folder and is not copied to the receive location as the tutorial says it should be.
I have tried restarting the computer and the application, and I tried changing the path (I don't know if I can). Does anyone know why the file might not be showing up?
Yes, you have to create the folder manually; just setting the path in the send port won't create it.
There can be various issues:
1. The file is not even picked up. This can be due to:
a. The receive location is not enabled. Fix: enable the receive location.
b. The file mask on the receive location doesn't match the file. Fix: change either the filename or the file mask.
c. The BizTalk host instance user doesn't have the correct permissions on the folder to pick the file up. Fix: give the BizTalk user full permissions on the folder.
d. The host instance that the receive location runs under is stopped. Fix: start the host instance.
2. The file is picked up but the message is suspended. This is caused by the filter/subscription on the send port not matching the Promoted Properties in the message context. Fix: fix the send port filter.
3. The message is in an active state but not sending. Cause: the host instance that the send port runs under is not started. Fix: start the host instance the send port runs under.
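If you want to script the first of those checks, a minimal PowerShell sketch against BizTalk's WMI provider looks like this (run it on the BizTalk server; the receive location name is a placeholder for yours):
$rl = Get-WmiObject -Namespace 'root\MicrosoftBizTalkServer' -Class MSBTS_ReceiveLocation -Filter "Name='MyReceiveLocation'"
$rl.IsDisabled   # True would explain point 1a
$rl.Enable()     # enables the receive location in place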

TFS 2017 TestExecution.zip inaccessible to test agent machine

I'm very confused by the behavior of the TFS 2017u2 Deploy Test Agent task when operating in offline mode. The task appears to fail when looking on the test agent machine for a path local to the deployment agent. Either that or it fails trying to connect via admin share.
Using the Deploy Test Agent task in Release Management, the Test Agent will install successfully from a separate UNC share. However, the task will then detect the DTAAgentExecutionService and DTAExecutionHost are not present/running. It looks like the RemoteDeployerService then reaches back from the Test Agent workstation to the Deployment Agent machine to grab a copy of the TestExecution.zip archive. This elicits warnings in the log: "Failed to connect to the path \\******** with the user....", "System error 53 has occurred," and "The network path was not found." Finally, the effort fails with:
[error]Error occured on '********:5986'. Details : 'Test agent source path 'D:\agents\DA_1\_work\_tasks\DeployVisualStudioTestAgent_52a38a6a-1517-41d7-96cc-73ee0c60d2b6\2.1.8\TestExecution.zip' is not accessible to the test machine. Please check if the file exists and that test machine has access to that machine'.
That path is clearly referencing the deployment agent machine, however, so I don't know why the test agent would be looking for it in a local path (d:\ instead of \\). To be sure, I manually logged into the test agent workstation with the same account configured for the RM task and I can access that folder and archive remotely via \\d$\ share without a problem.
Is the test agent trying to reach back to the deployment agent?
I expected the deployment agent machine to initiate the installation and monitor remotely, pushing content to the test agent workstation as required. In our production space firewalls allow traffic originating from the deployment agent to reach the test agent but not the other way around.
If not, what is it doing? Where does it expect to find TestExecution.zip? I tried putting a copy in the same UNC folder as the TestAgent.zip installer but that did nothing.

Jenkins ssh without password

In order to automate builds on a server, I had to do the following:
1. Make a user with root access on the destination server.
2. Add the RSA public key generated with ssh-keygen to authorized_keys on the destination server, for passwordless login.
3. Create a script whose first command is ssh user@dest.
The problem we are facing is that command execution still asks for a sudo password... How do we achieve this in a script or otherwise?
There is a plugin to make this simple.
The SSH plugin can take server details along with credentials and handle all of it for us.
To use it, follow these steps:
1. Install the SSH plugin on your Jenkins from Manage Plugins.
2. Go to Configure System and add your server under SSH remote hosts.
3. Add all the details required to connect to the server.
4. In your job, add the build step "Execute shell script on remote host using ssh".
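Note that the sudo prompt is separate from SSH authentication: key-based login removes the SSH password prompt, not sudo's. A common fix is a NOPASSWD sudoers rule on the destination server, sketched below assuming the build user is named jenkins (narrow the command list in production instead of allowing ALL):
# on the destination server, as root
echo 'jenkins ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/jenkins
chmod 440 /etc/sudoers.d/jenkins
visudo -c   # sanity-check the sudoers syntax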

GlassFish 3.1.2 - validate-dcom fails with "The remote file, C: doesn't exist" (Centralized Administration with Windows DCOM)

OS: Windows Server 2008 R2 x 2 (firewall disabled on both machines)
I wish to take advantage of the GlassFish 3.1.2 Windows DCOM feature to set up communication between the GlassFish DAS and a remote node. I've successfully followed Byron Nevins's instructions on using the GlassFish 3.1.2 DCOM Configuration Utility.
However, I'm having an issue validating DCOM when following the instructions in the GlassFish 3.1.2 Guide, chapter 2, Enabling Centralized Administration of GlassFish Server Instances.
When I run the command validate-dcom --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80, I get the following output:
asadmin> validate-dcom --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80
remote failure:
Successfully verified that the host, 192.168.0.80, is not the local machine as required.
Successfully resolved host name to: /192.168.0.80
Successfully connected to DCOM Port at port 135 on host 192.168.0.80.
Successfully connected to NetBIOS Session Service at port 139 on host 192.168.0.80.
Successfully connected to Windows Shares at port 445 on host 192.168.0.80.
The remote file, C: doesn't exist on 192.168.0.80 : Logon failure: unknown user name or bad password.
The password file, password.txt, contains a single entry:
AS_ADMIN_WINDOWSPASSWORD=my-windows-password
I have double-checked that I can successfully log in with my Windows password on the remote machine 192.168.0.80. I've also tried this test with two Windows XP Professional machines and got the same error.
I also performed this operation by creating a new node in the Admin Console and got the same error.
I cannot figure out what is going wrong or what I may be missing.
Thanks in advance.
I had similar issues while setting up the new production environment at work last Friday, and could not find any useful information on the interwebs, except other people encountering the same issue, some with comments as fresh as the day I was looking it up.
So after a rather excessive amount of painful, in-depth debugging, I was able to figure out a few things:
1. You must explicitly specify the local Windows user you created for the purpose of running GlassFish in both the add-node dialog and the validate-dcom subcommand (option -w); otherwise it will default to 'admin' or to the user the DAS is running as. (An example command follows this list.)
2. There is a bug in validate-dcom that causes it to ignore whatever you specify as the test directory. No matter what you do, it will always use C:\ and fail with "access denied".
3. The documentation omits another registry key that must be granted access in order for WMI to work.
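For the first point, this is roughly what the invocation looks like with the Windows user passed explicitly (the user name glassfish here is a placeholder; -w is the short form of validate-dcom's --windowsuser option):
asadmin> validate-dcom -w glassfish --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80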
Regarding the first issue: you will most likely encounter it if your nodes are not part of a domain or you are using a local account. Windows NT6+ has a new default security policy that prevents local users from elevating privileges over the network, which necessarily causes that test to fail, seeing how writing to the root of the system drive is not something one can do without elevation.
I previously blogged about it, for anyone who stumbles upon this issue:
http://www.raptorized.com/2008/08/19/access-administrative-shares-on-server-2008vista/
The gist of it is that you have to navigate to the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
and create a new DWORD named LocalAccountTokenFilterPolicy with a value of 1.
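Equivalently, from an elevated command prompt, the same change as a one-liner:
reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f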
There is no need to reboot; the first, broken test should now pass. However, you will then see an error about being unable to connect to WMI, and it will fail again.
To remedy this, you must also take ownership of, and grant your local service account user full control over, the following registry key, in addition to the other keys described in the HA Administration Guide:
HKEY_CLASSES_ROOT\CLSID\{76A64158-CB41-11D1-8B02-00600806D9B6}
Afterwards, validate-dcom should report success, and you will be able to add the machine as a node and create instances on it.
I hope this helps, because the seeming lack of activity from Oracle on this issue was infuriating.
I am also less than pleased by the hackish, ugly, insecure nature of the DCOM support in GlassFish 3 :(

Amazon EC2: How Do I Host My Own Content on a Bitnami WordPress Instance

I created an instance to host my WordPress blog. I made a key pair and converted it using PuTTYgen so that it would work with WinSCP.
My security group that is associated with my instance has:
ICMP Allow All
TCP 0-65535
TCP 22 (SSH)
TCP 80 (HTTP)
TCP 443 (HTTPS)
UDP 0-65535
I am running a Bitnami-Wordpress 3.2.1-0 Ubuntu AMI
My Question is: How do I host a simple file on my instance?
UPDATE: I was able to log in using SFTP by simply filling in my instance's public DNS as the host and the PuTTYgen key as the private key; the username I had to use was bitnami. So now I have access to the server. How or where do I put a file so that it will come out at www.mywebsite.com/myfile.file?
I am assuming that I need to SSH into the server using PuTTY and add it into the www directory?
What I have tried:
I tried logging in using WinSCP with the host name being my instance's public DNS, and my private key file being the converted PuTTYgen file that was originally the key pair for the instance.
Using SFTP and pressing Login, it asks me for a username; entering "user" or "ec2-user" I get an error saying:
"Disconnected, no supported authentication methods available (server sent: public key). Server refused our key. Authentication failed."
Using root for the username, it asks for the passphrase I created for my key pair using PuTTYgen. It accepts it, but then I get this error:
"Received too large (1349281121 B) SFTP packet. Max supported packet size is 1024000 B. The error is typically caused by message printed from startup script (like .profile). The message may start with "Plea". Cannot initialize SFTP protocol. Is the host running a SFTP server?"
If in WinSCP I put the username as "user" and the password as "bitnami" (before I press Login) (the default WordPress password for the Bitnami AMI), it gives me this error:
"Disconnected: No supported authentication methods available (server sent: publickey). Authentication log (see session log for details): Using username: "user". Server refused our key. Authentication failed."
I get the same errors using SCP instead of SFTP in WinSCP, except when I use SCP and press Login with the username "root": it asks me for my passphrase, and after entering it I get this error:
"Connection has been unexpectedly closed. Server sent command exit status 0. Error skipping startup message. Your shell is probably incompatible with the application (BASH is recommended)."
Also, if you want to remove wordpress from the URL, you can use the following instructions, which I posted on my blog (travisnelson.net):
$ sudo chmod 777 /opt/bitnami/apache2/conf/httpd.conf
$ vi /opt/bitnami/apache2/conf/httpd.conf
changed DocumentRoot to be: DocumentRoot "/opt/bitnami/apps/wordpress/htdocs"
$ sudo chmod 544 /opt/bitnami/apache2/conf/httpd.conf
$ sudo apachectl -k restart
Then in WordPress, change the Site address (URL) in General Settings to not have /wordpress.
Hope this helps
If you are already able to connect using SFTP, then you just need to copy the file. Where to copy it depends on what you are trying to do.
The BitNami WordPress AMI has the following directory structure (I only include the directories relevant to this question):
/opt/bitnami
|
|-- apache2/htdocs
|-- apps/wordpress/htdocs
You mentioned that you want www.mywebsite.com/myfile.file. If you didn't modify the default Apache configuration, you will need to copy the file into /opt/bitnami/apache2/htdocs (this is the DocumentRoot for the BitNami WordPress AMI).
If you want the file to be accessed from www.mywebsite.com/wordpress/myfile.file, then you need to copy it into /opt/bitnami/apps/wordpress/htdocs.
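For example, a minimal upload sketch from a machine with the OpenSSH client (the key file and DNS name are placeholders for your own; WinSCP's GUI accomplishes the same thing):
scp -i mykey.pem myfile.file bitnami@ec2-xx-xx-xx-xx.compute-1.amazonaws.com:/opt/bitnami/apache2/htdocs/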
If what you are trying to do is manually install a theme or plugin, you can follow the WordPress documentation, taking into account that the WordPress installation directory is /opt/bitnami/apps/wordpress/htdocs.
Also, you can find below some links to the BitNami Wiki explaining how to connect to the AMIs. I include them as a reference for other users who run into the same connection issues.
Further reading:
How to connect to your amazon instance
How to upload files from Windows
I had a similar problem recently. Having set up Bitnami WordPress on Amazon AWS, I was unable to modify, add, or remove themes from within the WordPress admin interface, even though all of my permissions were set up appropriately according to WordPress's recommended settings. However, I did not want to resort to turning FTP access on.
I was able to resolve the issue by:
1. Setting the file access method for WordPress to 'direct'.
2. Changing ownership of the WordPress files to the Apache/Bitnami users.
3. Adding bitnami to the Apache group and the Apache user to the bitnami group.
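A minimal sketch of those three steps, assuming the Bitnami layout shown above and that Apache runs as the user/group daemon (verify with ps aux | grep httpd; adjust names and tighten permissions to taste):
# 1. Let WordPress write to the filesystem directly instead of asking for FTP
echo "define('FS_METHOD', 'direct');" | sudo tee -a /opt/bitnami/apps/wordpress/htdocs/wp-config.php
# 2. Give the WordPress tree a consistent owner and group
sudo chown -R bitnami:daemon /opt/bitnami/apps/wordpress/htdocs
# 3. Cross-add the two users to each other's groups
sudo usermod -a -G daemon bitnami
sudo usermod -a -G bitnami daemon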
