Organise AWS CodeCommit repositories in groups/projects - aws-codecommit

I am just starting with AWS CodeCommit and wonder how I can organize repos into projects/groups.
Coming from GitLab, I can create a "Group" and, within this group, create my repos.
For example, if I have a project "MyApp" with a server, web-client, ios-client, and android-client repo, I have 4 repos in the group "MyApp". You get the idea :-)
I cannot find anything like that in CodeCommit.
I did find "Tags". Is that Amazon's solution for groups?
Or am I missing something?
Thank you!

This feature is, regrettably, not supported in CodeCommit as of today. CodeCommit provides ways of sharing repositories with different entities and granting them access [1][2][3], but there is no feature to group repositories together on the CodeCommit console in a visual way comparable to what GitLab Groups does today.
References:
[1] https://docs.aws.amazon.com/codecommit/latest/userguide/how-to-share-repository.html
[2] https://docs.aws.amazon.com/codecommit/latest/userguide/auth-and-access-control.html
[3] https://aws.amazon.com/blogs/devops/refining-access-to-branches-in-aws-codecommit/
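That said, tags combined with the Resource Groups Tagging API can approximate grouping. A minimal sketch, assuming the region, account ID, repository names, and tag values below are placeholders:

```shell
# Tag each repository that belongs to the "MyApp" project
# (the ARNs, region, and account ID here are placeholders).
for REPO in server web-client ios-client android-client; do
  aws codecommit tag-resource \
    --resource-arn "arn:aws:codecommit:eu-west-1:111111111111:${REPO}" \
    --tags project=MyApp
done

# List all CodeCommit repositories tagged with project=MyApp.
aws resourcegroupstaggingapi get-resources \
  --resource-type-filters codecommit \
  --tag-filters Key=project,Values=MyApp
```

This gives you a queryable grouping and lets you scope IAM policies by tag, but it does not change how the console displays repositories.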

Related

How to check if there are sufficient resources to create an instance?

I want to check whether there are sufficient resources available to create an instance of a specific flavor, without actually creating the instance. I tried stack --dry-run, but it does not check whether resources are available.
I also went through the CLI and REST API docs, but did not find any solution other than checking the available resources on each hypervisor and calculating it manually (comparing them with the resources required by the flavor). Isn't there an easier solution, like a CLI command that would give me a yes/no answer?
Thank you.
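The manual calculation described above can be sketched as follows; all numbers are hypothetical sample data, and the field names are illustrative rather than real API output:

```python
# Sketch of the manual capacity check: compare a flavor's requirements
# against the free resources reported for each hypervisor.
# All data below is hypothetical sample input, not real API output.

def fits_on_some_hypervisor(flavor, hypervisors):
    """Return True if at least one hypervisor can host the flavor."""
    return any(
        h["vcpus_free"] >= flavor["vcpus"]
        and h["memory_free_mb"] >= flavor["ram_mb"]
        and h["disk_free_gb"] >= flavor["disk_gb"]
        for h in hypervisors
    )

# Data you would gather with e.g. `openstack flavor show` and
# `openstack hypervisor stats show` (field names here are made up).
flavor = {"vcpus": 4, "ram_mb": 8192, "disk_gb": 40}
hypervisors = [
    {"vcpus_free": 2, "memory_free_mb": 16384, "disk_free_gb": 200},
    {"vcpus_free": 8, "memory_free_mb": 4096, "disk_free_gb": 100},
    {"vcpus_free": 8, "memory_free_mb": 32768, "disk_free_gb": 500},
]

print(fits_on_some_hypervisor(flavor, hypervisors))  # True: the third host fits
```

Note that a True result is only advisory: scheduler filters, overcommit ratios, and races with other tenants can still cause the actual boot to fail.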

Can you just store binaries?

We are using Artifactory Enterprise and, in addition to "normal" usage, we would like to just store some binaries in Artifactory. This is so we can limit egress and pull the binaries from Artifactory instead of the general Internet. Is this possible? Is there a documentation link that will help explain the process?
Yes, this can be done by creating a generic local repository and deploying the binaries through the UI or the REST API; you can then pull the binaries from the generic local repository. Refer to this blog as well.
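As a sketch, deploying and later retrieving a binary over the REST API looks like this; the server URL, credentials, repository name, and file path are all placeholders:

```shell
# Deploy (upload) a binary to a generic local repository with HTTP PUT.
# Server URL, credentials, and repo name are placeholders.
curl -u "$ART_USER:$ART_PASSWORD" \
  -T ./installer-1.0.0.bin \
  "https://artifactory.example.com/artifactory/generic-local/installers/installer-1.0.0.bin"

# Later, pull the binary from Artifactory instead of the general Internet.
curl -u "$ART_USER:$ART_PASSWORD" -O \
  "https://artifactory.example.com/artifactory/generic-local/installers/installer-1.0.0.bin"
```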

Migration of binaries to JFrog Artifactory

Is there a script or any other automated process for migration of artifacts into JFrog? We are currently working on this and need more information to carry out this process. Please help us in achieving this. Thanks in advance.
If you have an existing artifact repository, JFrog Artifactory supports acting as an artifact proxy while you are in the process of migrating to Artifactory.
I would recommend the following:
Create a local repository in Artifactory.
Create a remote repository in Artifactory that points to your current artifact repository.
Create a virtual repository in Artifactory that contains both the local and remote repositories.
Update all your projects to publish to the local Artifactory repository and pull from the virtual repository.
The advantage of this workflow is that you can port things over piece by piece rather than trying to do it all at once. If you point a dependency at Artifactory that hasn't been ported there yet, Artifactory will proxy it for you. When the dependency is ported over, it will be transparent to its users.
When you have moved everything to your local Artifactory repository, then you can remove the remote repository from your virtual repository.
The relevant documentation is available here: https://www.jfrog.com/confluence/display/RTF/Configuring+Repositories
For an Enterprise account, I'd assume S3 storage and a significant number of artifacts, so there will be no easy, automated way to do it. It is also highly dependent on the storage implementation chosen in the on-prem solution. If you plan to use S3 storage, JFrog can help perform S3 replication. In other scenarios, the solution will be different. I suggest contacting support.

bosh-lite installation on openstack

I have already installed BOSH-Lite and Cloud Foundry on a single VM using the tutorial at https://docs.cloudfoundry.org/deploying/run-local.html. Is there a way to install BOSH-Lite and Cloud Foundry on OpenStack?
I searched a lot but could not find a proper answer; what I found was disconnected, like installing BOSH and OpenStack on a single VM, and I don't know whether that is useful to me.
I am pretty new to Cloud Foundry and OpenStack, so things are pretty confusing for me. My ultimate goal is to deploy and test Docker with Cloud Foundry, which means installing Diego. I could have used cf_nise_installer, but I am not sure whether it supports Diego.
Thanks.
I'm not sure why you want to deploy CF and Diego on a single VM on OpenStack.
Why a single VM, could it be 2 or 3?
Why OpenStack, why not AWS, or DigitalOcean, or something else?
Do you need all the features of CF (multi-tenancy, service integration, buildpacks) or is Docker + Diego + Routing + Logging + a nice CLI sufficient, etc?
At any rate, there is no out-of-the-box solution for your exact set of requirements, but you have several options, with tradeoffs:
Use local BOSH-Lite instead of OpenStack. You can deploy Diego to your BOSH-Lite alongside your CF, and play around with the Docker support there. See instructions here: https://github.com/cloudfoundry-incubator/diego-release/#deploying-diego-to-bosh-lite
Use Lattice. Lattice is basically Diego, along with routing, log aggregation, and a CLI that makes it easy to push Docker-based apps, scale them up and down, get logs, etc. You will not get the full CF feature set; for instance, there is no UAA, which CF uses for user authentication, managing multi-tenancy, scopes and permissions, etc. You can check out the Lattice website: http://lattice.cf/. Docs on deploying Lattice are here: http://lattice.cf/docs/terraform/. You can see several deployment options there, including OpenStack if you search the page.
If you're willing to do a lot more work, you could either figure out how to make BOSH-Lite work against the OpenStack provider, or you could figure out enough about how BOSH manifests are structured and then use bosh-init to deploy a single-VM deployment to OpenStack that colocates all of CF and Diego into a single "job".

Nexus Repository Manager

We want to install Nexus in an environment where more than 100 developers will use it.
What is the maximum load that Nexus can handle? We might have more than 50 simultaneous requests for artifacts (a fresh local repo is used on every build).
I want to have multiple instances share the repository storage (I have tried it and it does not work, but I wonder whether anyone has managed to do this). We want to have one Nexus instance in read mode that is kept in sync with another in read/write mode. Any possibilities?
Please share your thoughts. Thanks in advance.
We have many huge customers running Nexus without load problems, including shops on the order of tens of thousands of users. That's in addition to the large public instances at:
http://repository.apache.org
http://nexus.codehaus.org
http://maven.atlassian.com
https://repository.jboss.org/nexus
http://oss.sonatype.org
and many others.
Two Nexus instances can't effectively share the entire storage folder, because Lucene is used for many things and those files aren't shareable. It might be possible to share just the repo folder, but the indexes and caches will be out of date on the standby.
Redundancy is something we're working on in commercial Nexus versions.
