JFrog UI & service crash abruptly with "Bad Artifactory Health" due to a high number of Bamboo builds - artifactory

The JFrog UI and service crash abruptly, reporting "Bad Artifactory Health", whenever the number of Bamboo builds gets high.
My JFrog instance is a self-hosted Pro version on a data-center VM, and its database is the embedded Derby DB. Whenever the number of concurrent Bamboo builds crosses about 80, the JFrog service and UI go down. We have 168 agents available for Bamboo builds. When the JFrog service goes down, all the local builds of team members fail, because every build needs to download packages from JFrog. Each time I have to restart the JFrog service manually, which is not a workable approach, since JFrog can go down abruptly at any time.
As an interim solution we have split each test plan into multiple stages and assigned at most 8 jobs per stage, limiting the plan to a maximum of 8 concurrent builds. Earlier we had a single stage for the test plan with around 50 jobs, so 50 builds would trigger at once, which put a lot of load on JFrog. We feel this may not be a permanent solution, though.
Can someone please let me know what configuration or other settings in JFrog would make sure JFrog never goes down, even under a huge number of high-intensity builds?
Expectation: the configuration or other settings to apply in JFrog so that it never goes down, even under a huge number of high-intensity builds.
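No answer is recorded here, but as a purely illustrative starting point (the hostname and ports are placeholders for a default self-hosted 7.x layout, not details from the post), the documented health endpoints can be polled while a large batch of Bamboo builds is running, to see which service reports as unhealthy before the crash:

# Hypothetical monitoring loop; artifactory.example.internal and the ports are placeholders.
while true; do
  date
  curl -s http://artifactory.example.internal:8082/router/api/v1/system/health; echo
  curl -s http://artifactory.example.internal:8081/artifactory/api/system/ping; echo
  sleep 30
done

If the embedded Derby DB turns out to be the bottleneck, JFrog's documentation covers moving a self-hosted instance onto an external database (e.g. PostgreSQL) via system.yaml, which is the configuration change most often suggested for this kind of concurrent load.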

Related

Artifactory pro v7.30.x fails to start (multiple versions and installation methods)

I am evaluating a self-hosted Artifactory installation on a trial license. I followed the official installation instructions for both the Docker container and the Linux archive file. Neither of these installation options is working: the Artifactory service fails to start.
I have opened an issue to track the problem: https://www.jfrog.com/jira/browse/RTFACT-27182
TL;DR: a component fails, a nasty stack trace appears in the logs, and eventually the services stop.
It would seem that there is a bug in Artifactory. I have traced this back through multiple versions, and the issue spans multiple years.
The problem appears to be that artifactory cannot get past the bootstrapping/initialization phase when started with artifactoryctl. At a certain point (around 2-5 minutes in) all the services stop and a pid file is left over, which is bad.
The workaround I have found is that the service gets past this initialization phase only after multiple start/stop cycles (three, to be exact). In other words, we call artifactoryctl start, wait for all the failures, then artifactoryctl stop, and repeat two more times. On the fourth and final start the service comes online (in about 150-190 s). From then on, the service starts correctly with a single call to artifactoryctl start.
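Purely as an illustration of the workaround above (the artifactoryctl path and wait times are guesses, not values from the report), the start/stop dance can be scripted:

#!/usr/bin/env bash
# Sketch of the manual workaround: three failing start/stop cycles, then a fourth
# start that is expected to come online. Path and sleep durations are placeholders.
ARTIFACTORYCTL=/opt/jfrog/artifactory/app/bin/artifactoryctl

for attempt in 1 2 3; do
  "$ARTIFACTORYCTL" start || true
  sleep 300                      # let the bootstrap fail (~2-5 minutes)
  "$ARTIFACTORYCTL" stop || true
done

"$ARTIFACTORYCTL" start          # fourth start: ~150-190 s to come online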
I have not yet looked at the systemd unit file. My guess is that it has, or could be made to have, a number of retries to work around this issue, and perhaps the issue does not exist when using the service wrapper.
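For what it is worth, that retry idea could be sketched as a systemd drop-in like the one below; the unit name and values are assumptions, not the unit file that actually ships with Artifactory.

# Hypothetical drop-in making systemd restart the service when the bootstrap fails.
sudo mkdir -p /etc/systemd/system/artifactory.service.d
sudo tee /etc/systemd/system/artifactory.service.d/override.conf > /dev/null <<'EOF'
[Service]
Restart=on-failure
RestartSec=30s
EOF
sudo systemctl daemon-reload
sudo systemctl restart artifactory.service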
I have also not yet looked again at the Docker container, which appears to be failing for the same reason. A workaround off the top of my head would be to modify the entrypoint script. If you were to docker exec into the container and try the workaround above, it would likely terminate the root process and kill the container.
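A less invasive sketch of the same idea for the container install (the image tag, ports, and retry count are illustrative, and this only helps if the container's main process actually exits non-zero when the bootstrap fails) is to let Docker retry the container instead of patching the entrypoint:

# Hypothetical: rely on Docker's restart policy rather than editing the entrypoint.
docker run -d --name artifactory \
  --restart=on-failure:5 \
  -p 8081:8081 -p 8082:8082 \
  releases-docker.jfrog.io/jfrog/artifactory-pro:7.30.0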

How did my Artifactory generic and docker repos suddenly change type/version?

We have been running Artifactory (currently version 6.9.0) in EC2 for months now with no problems. This was originally a licensed instance of Enterprise Artifactory that we let lapse (intentionally).
Last week we started getting a storage warning (we use cluster-s3 storage) that we were at 95% utilization (which disables uploads), so we started cleaning up old artifacts (binaries, Docker images) to get storage usage down. We got it down for a while, but it crept back up -- this time high enough that we couldn't ssh in, so we rebooted the machine via the EC2 Console.
It came right back with no obvious problems. Then we deleted a generic repository that someone had set up as a backup of another system (300 GB), which freed up plenty of space.
Today, a number of our builds started failing because the step to push the artifact to Artifactory failed. Upon further investigation, a number of our "generic" repositories are now appearing (and behaving) as "Docker" repositories. Further, a number of our v1 Docker repositories are now reporting as v2 Docker repos and blocking standard pushes from v1 clients.
The docs are pretty clear that we can't change the repo type, and I'm not seeing a way to migrate back to v1 from v2 Docker repos. I'm currently exporting one of the repos to see if we can import it as the right type.
Any idea what happened here? Did something get corrupted in the database? What can I even start to check?
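No diagnosis is recorded in this thread, but one place to start checking is what Artifactory currently stores as each repository's configuration, via the repository configuration REST API; the host, credentials, and repository key below are placeholders.

# Dump the stored configuration of an affected repo and inspect its package type
# (and, for Docker repos, the API version). All names and credentials are placeholders.
curl -s -u admin:password \
  "https://artifactory.example.internal/artifactory/api/repositories/my-generic-repo" \
  | jq '{key, packageType, dockerApiVersion}'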

Coded UI Automated Test Case through Octopus Tentacle

I am trying to run my automated test cases, which are deployed on a virtual machine, and trigger them with the Octopus deployment tool. I installed the test agent and the Octopus Tentacle on my machine. Octopus triggers the DLLs for the automated test cases just fine, but when it tries to run the test cases it gives me the error below:
Microsoft.VisualStudio.TestTools.UITest.Extension.UITestException: To run tests that interact with the desktop, you must set up the test agent to run as an interactive process. For more information, see "How to: Set Up Your Test Agent to Run Tests That Interact with the Desktop" (http://go.microsoft.com/fwlink/?LinkId=255012)
Error 01:59:38
If you are running the tests as part of your team build, you must also set up the build agent to run as an interactive process. For more information, see "How to: Configure and Run Scheduled Tests After Building Your Application" (http://go.microsoft.com/fwlink/?LinkId=254735)
I set up my password in the test agent and configured it as an interactive process, but I am still facing the same issue.
I am triggering my DLLs through Octopus as below:
& "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe" "C:\MyWebaPP\Automated_test\Automated_test.dll"
I have tried every approach I could find. Please help me out with this.
Thanks in advance!!
We recently encountered the same problem.
During our research, we found this post on the Octopus support forum:
http://help.octopusdeploy.com/discussions/questions/5080-tentacle-running-interactive-tests
We also contacted Octopus Deploy by mail, and they essentially gave us the same response.
While we had no luck with the "scheduled task for test run" approach, we eventually managed to get it working by running the Octopus Tentacle as a process rather than a service.
The challenge here was making sure the Tentacle would start when our test machine started. We wanted this to happen automatically, so RDPing in and starting the process every time was out of the question (this also caused some additional problems for the UI test run...).
The final working solution was to schedule a task that would start the Octopus Tentacle as an interactive process whenever the machine booted (i.e. run Tentacle.exe directly), and then make sure we never RDP to this machine. Make sure the task has sufficient privileges, and that it "Runs whether the user is logged in or not". Also, remember to disable automatic startup of the Octopus Tentacle Service.
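For illustration only, such a boot-time task could be registered as below; the install path, instance name, task name, and account are assumptions rather than details from the original answer.

# Hypothetical registration of the boot-time task (run from an elevated PowerShell prompt).
$action  = New-ScheduledTaskAction -Execute "C:\Program Files\Octopus Deploy\Tentacle\Tentacle.exe" `
                                   -Argument "run --instance=Tentacle"
$trigger = New-ScheduledTaskTrigger -AtStartup
Register-ScheduledTask -TaskName "Octopus Tentacle (interactive)" `
                       -Action $action -Trigger $trigger `
                       -User "MYDOMAIN\uitestuser" -Password "PLACEHOLDER" `
                       -RunLevel Highest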
Edit: We had some trouble making this solution work across all our environments. It seems that for security reasons, newer versions of Windows are quite skeptical about allowing scheduled tasks to start interactive processes when there is no user logged on.
We did another search for possible solutions and came across FireDaemon Pro (commercial software), which allows us to register interactive Windows services that run under a domain user. We are not quite sure how it works, but it seems able to run a UI from a Windows service in Session 0 (the UI is also isolated). The Octopus Tentacle starts without complaint, and the UI tests run the way we want them to.

Slow meteor build performance in docker container

The build of a Meteor project takes a long time when performed inside a Docker container (> 30 min), but done on a local machine or directly on the server without Docker, everything works fine (2-10 min).
After some research I still do not know how to tackle this issue. Can anyone give me further pointers on how to debug my Docker setup? The performance seems really bad.
What should the minimum server hardware be to get a quick build time? The project itself is not large.
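No answer is recorded here, but a hedged first pass at narrowing it down is to compare the resources the container actually gets with what the host has, and to watch usage while the build runs; the image name and limits below are placeholders.

# Compare what Docker reports with the host, then watch usage during a build.
docker info --format 'CPUs: {{.NCPU}}  Memory: {{.MemTotal}}'
docker stats --no-stream                      # CPU / memory of running containers

# Re-run the build with explicit limits to see whether CPU or RAM is the bottleneck;
# the image name and limits are illustrative only.
docker run --rm -it --cpus=4 --memory=4g \
  -v "$PWD":/app -w /app my-meteor-image \
  meteor build /output --directory

If the container turns out to be CPU- or memory-starved compared to the host (a common situation when Docker runs inside a VM with a small allocation), raising those limits is usually where the gap comes from.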

Dev and deploy management of a web site with SVN

A .NET solution for a website, consisting of 5 projects, with a few (fewer than 10) developers working on it. We deploy almost daily.
The question is how to set up the SVN repo to support this scenario (the daily deploy), bearing in mind that not every committed file should go to production; there is a QA check before deploying.
Try out TeamCity (a CI tool), as it is free for smaller amounts of CI. It may suit you better than CruiseControl.NET, because CCNET is very configuration-heavy (everything is done via XML), whereas TeamCity uses wizards to create the scripts that manage releases.
If you need any other help with CI, let me know, as it is something I am evangelistic about.
What you want to do is commonly referred to as Continuous integration (CI).
While you can do that using Subversion, it is probably not the right tool for the job.
There is special CI software, which will allow you to easily automate the necessary tasks (checkout from version control, compiling / building, running automatic tests, deployment etc). An example would be CruiseControl.NET.
As to "not every commited file should go to production", the common solution is to have a special "release" branch, which gets deployed. Only tested code is merged there (or have the trunk always be stable, otherwise same model). Of course, you can also (better: additionally) have tests before your automatic deployment, and only deploy if all tests pass.
Working with a release branch
In practice, this means that people check in their code as they produce it. Sometimes this code will work, sometimes not. When the release time draws nearer, a "release branch" is created in Subversion. This release branch is then effectively a frozen snapshot of the source as it was at the time of branching. Now this branch can be used to compile & deploy the application, which can then be tested.
No new code is checked into the branch (but checkins can continue elsewhere). Only if a bug is detected in the branch, will there be a checkin into the branch to fix it. This continues until the branch passes all tests. Then the branch can be released as a new version of the software; afterwards the branch will only be used if the released version needs to be patched.
Of course, any bugfixes checked into the branch need to also be put into the trunk (either by merging branch -> trunk, for which Subversion provides special support, or by reimplementing the fix in the trunk, as appropriate).
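As a concrete illustration of this branch-and-merge-back flow (repository URLs, branch names, and revision numbers are placeholders, not taken from the answer):

# Freeze a snapshot of trunk as a release branch.
svn copy https://svn.example.com/repo/trunk \
         https://svn.example.com/repo/branches/release-1.4 \
         -m "Create release branch 1.4"

# Later, a bug is fixed directly on the branch as r1234; carry it back to trunk.
svn checkout https://svn.example.com/repo/trunk trunk-wc
cd trunk-wc
svn merge -c 1234 https://svn.example.com/repo/branches/release-1.4 .
svn commit -m "Merge r1234 (release-1.4 bugfix) back into trunk"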
