Read-only anonymous access to Azure Container Registry

Is there a way to configure an Azure Container Registry (managed) to support anonymous access? The containers in my registry are boring (Ubuntu + some packages) and are used for building on Travis CI.
At the moment, the approach I'm planning to take is to create a Service Principal and just embed the key in my .travis.yml, effectively "leaking" it.
Travis's secret environment variables feature has a sensible default of not sharing encrypted variables with builds done in response to a pull request, but for my purposes, I do want pull requests to be able to pull the images. Thus, it doesn't appear applicable to my problem.
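For reference, the service-principal approach would look something like the sketch below (the registry and principal names are placeholders; the acrpull role limits the principal to pulls only):

# Look up the registry's resource ID to scope the role assignment
az acr show --name myregistry --query id --output tsv
# Create a service principal that can only pull from this registry
az ad sp create-for-rbac --name travis-ci-pull --role acrpull --scopes <registry-resource-id>
# In the Travis build, log in with the principal's appId and password
docker login myregistry.azurecr.io --username <appId> --password <password>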

Configuring an Azure Container Registry for Read-Only Anonymous Pull Access
Public read-only access (unauthenticated, anonymous pull) is an optional feature you can enable on an Azure Container Registry (ACR). Once enabled, any internet user can pull content from the registry without authenticating.
This feature is currently in preview and is available for Standard and Premium SKU registries.
Enabling Anonymous Access
To enable anonymous access on an existing registry, run the az acr update command and pass the --anonymous-pull-enabled parameter. Anonymous pull is disabled by default.
az acr update --name myregistry --anonymous-pull-enabled
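Once enabled, an unauthenticated client can pull images directly; for example (the repository name here is hypothetical):

# No docker login required once anonymous pull is enabled
docker pull myregistry.azurecr.io/ubuntu-build:latest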
Disabling Anonymous Access
Disable anonymous pull access on an existing registry by setting --anonymous-pull-enabled to false.
az acr update --name myregistry --anonymous-pull-enabled false
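The registry resource exposes an anonymousPullEnabled property, so a quick check of the current setting looks like:

az acr show --name myregistry --query anonymousPullEnabled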

ACR currently does not support anonymous access. You can find the roadmap and issues here - https://github.com/Azure/acr/blob/master/docs/acr-roadmap.md

As of March 2019, read-only/anonymous access is in public preview. You need to contact the ACR team to have it enabled. From the feature request's completion notes:
Anonymous registries (referred to as public registries) are now supported for preview customers. Preview is for two reasons:
Most customers want only a subset of their registry to be anonymous. We need to finish the repo-permissions feature (https://feedback.azure.com/forums/903958-azure-container-registry/suggestions/31655977-configure-permissions-at-a-repository-level) to enable anonymous repos.
To support the range of customers, from small to large, we're working on a new consumption-based billing tier. This way small use-case customers don't have to pay for a premium registry, and large customers can get the scaling they need, based on their use.
To enable public repos, please contact acrsup@microsoft.com.
Once repo permissions and the new billing tier are complete, we’ll have this directly available in the portal and cli.
Until the new pricing tier is available, public repos will be limited to standard and premium tiers.

Related

How to add Azure custom Policy for Azure Data Factory to only use Azure Key Vault during the Linked Service Creation?

How to add Azure custom Policy for Azure Data Factory to only use Azure Key Vault during the Linked Service Creation for fetching the Data Store Credentials instead of credentials being put up directly in ADF Linked Service. Please suggest ARM or PowerShell methods for the policy implementation.
As of yesterday, the Data Factory Azure Policy integration is available, which means you can now find some built-in policies that can be assigned to ADF.
One of those does exactly what you're asking for. You can find more information here.
Edit: Based on your comment, I'm editing this answer with the info you want. When it comes to custom policies, it's pretty much up to you to come up with them and create what fits your needs. For your particular case, I've created one policy that does what you want; please see here.
This policy will audit your data factory linked services and check if they're using a self-hosted integration runtime. Currently, that check is only done for a few types of integration runtimes (if you look at the policy, you can see 5 of them), which means that if you want to check more types of linked services, you'll need to add them to the list of allowed values and select them when assigning the policy definition.
Bear in mind that for some linked service types, such as Key Vault, that check won't make sense, since that service can't use a self-hosted IR.
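If you prefer the CLI over the portal, a minimal sketch for registering and assigning such a custom policy looks like this (the definition name is arbitrary, and policy-rule.json is assumed to hold the rule JSON from the linked policy; the same can be done in PowerShell with New-AzPolicyDefinition and New-AzPolicyAssignment):

# Register the custom policy definition from a local rule file
az policy definition create --name adf-linkedservice-audit --display-name "Audit ADF linked service credential usage" --rules policy-rule.json --mode All
# Assign it at the subscription (or resource group) scope
az policy assignment create --name adf-linkedservice-audit --policy adf-linkedservice-audit --scope /subscriptions/<subscription-id>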

Google Scope Authorizations Loop Endlessly When Previewing or Publishing Apps with Cloud SQL Database

My organization set up Cloud SQL as the default for Google App Maker about one month ago. In the last week, we have been unable to preview or publish apps that use Cloud SQL data sources, including the sample applications, which worked perfectly before. The failure occurs during the authorization process. When previewing or publishing an app, Google App Maker displays a dialog stating "Deploying this app requires authorization". Next, it prompts the user for their Google account and then requests approval for the necessary authorizations (e.g., "Manage the data in your Google SQL Service instances"). After the user approves the authorization, the prompts start over again with the dialog stating "Deploying this app requires authorization".
Observations:
We have reproduced this problem on multiple computers and networks, and with four different user accounts.
In the Cloud SQL console, our Cloud SQL instance shows new databases being created for each app, along with new database-specific user accounts.
All of the databases appear as expected when I log directly into the Cloud SQL database using phpMyAdmin.
Other apps which don't use a Cloud SQL data source work fine, including an app that uses a calculated data source hosted in the same Cloud SQL instance.
The only errors in the Stackdriver logs for the Cloud SQL database were "INFO"-level communication errors with the database (aborted connection...Got an error reading communication packets).
I'm unable to find Stackdriver logs for the apps because I cannot preview or publish them (either option would provide a link to the Stackdriver logs).
There are now approximately 20 databases in our SQL instance (mostly associated with simple app tests), and we have used only 1 GB of the 10 GB of space in our SQL instance.
I haven't seen any related problems on the Google Issue Tracker for Google App Maker.
I'd appreciate any help or suggestions on what to check in order to resolve this issue.
I posted an issue to Google Issue Tracker and Google corrected the problem. They also provided a workaround if this problem happens again.
Here is the response from the Google development team posted on Google Issue Tracker: https://issuetracker.google.com/issues/145345198
It's great to hear you're up and working again! We are aware of this issue and are working through a longer-term fix. The specific bug appears to be related to some changes made in the Google Cloud session policy control that may have rolled out to your domain recently, interacting with AppMaker in a way that was not expected. We've spent time diagnosing the underlying issue and we believe we know the root cause. I suspect your domain admin did a version of the workaround below.
Without getting too far into the details, the specific bug is that for a Deployer of an AppMaker application, if the Google Cloud session policy is set with any expiration time, the returned token AppMaker sees is invalid, triggering a loop in AppMaker trying to generate a valid security token. Historically, these session tokens never expired, but recently there was a beta feature launch that allowed domain admins to set them to expire. We strongly suspect your domain recently set this expiration policy explicitly, and that's what is causing the bug.
The good news is that these policies are overridable per Organizational Unit and we have tested that OUs which have the original classic Never Expire setting do, in fact, allow AppMaker to work.
My suspicion is that your domain admin has reverted recent, local changes to your organizational policy under the admin.google.com console, specifically under Security > Google Cloud session control (Beta).
If this happens again, here is the workaround we would recommend. Note you don't need to do this if you're currently up and working. You will need the help of someone with admin.google.com powers, specifically User and Organizational Unit powers, at your organization. It is a slight increase in security risk, but it restores some classic behavior that was standard until recently.
The summary of the workaround is to override the Google Cloud session control expiration setting such that individuals who need access to AppMaker deployments can have it. To mitigate systemic security risk, this is best done by creating a limited purpose Organizational Unit with just that setting different than the parent OU settings.
The workaround is to:
1. Contact someone in your domain with Admin powers for your Google for Business license.
2. Have your admin proceed to https://admin.google.com. The actions below need to be performed by a domain admin.
3. Under the Users section, identify the specific user account that needs the ability to deploy AppMaker apps.
4. Identify the Organizational Unit of that AppMaker dev user and make a note of it.
5. Under the Organizational Units settings, locate the Organizational Unit you identified above.
6. Create a new Organizational Unit underneath that user's current Organizational Unit, with a descriptive name identifying it as special w.r.t. AppMaker. So for Developers, make something like DevelopersWhoAreAlsoAppMakerDevs.
7. Back under the Users tab, locate the user from step 3. Move this user into the new Organizational Unit you've just created. This change can take a while to propagate.
-Interlude- At this point, you've made a new Organizational Unit for just that individual and added them to it. You can certainly add multiple people to that OU, especially if they're already in the same parent OU. Use your discretion as to what amount of organizational rework you wish to pursue. You may not be using OUs at all, or you may decide to just turn off this control for the whole domain. It's up to you.
8. Under admin.google.com's Security settings, locate the Google Cloud session control (beta) settings.
9. Under this panel, from the dropdown menu on the left, locate the Organizational Unit you just created.
10. Be sure to select ONLY the OU you intend to change.
11. Change the "Google Cloud Console and Google Cloud SDK session control" setting from expiring to "Session Never Expires".
12. Save your changes.
The account you selected in step 3 should now be able to deploy AppMaker apps.
It appears this OU change is only necessary for the deployer of an AppMaker app, not for every individual user of the app. Note also that if you have multiple AppMaker developers who all have different current OU settings, you may need to create multiple daughter OUs to avoid a sudden radical shift in OU settings for an individual account.

Is it possible to enable using Google Cloud Endpoints Portal without granting extra permissions to access GCP projects on client side?

I have successfully deployed a Google Cloud Endpoints Developer Portal for my API running on Endpoints. I would like to give testing access to people outside my organisation who are not using GCP in their own projects.
Login to the portal works correctly if I enable the Service Consumer role for these people (on a per-email basis). However, when they open it for the first time, they are asked to grant some extra permissions to the portal.
This consent form can create totally unnecessary security concerns. Does anyone know why it is needed?
I only would like my clients to be able to test my API using a GUI before they start connecting their projects (not necessarily on GCP) to mine. This seems like a valid use case to me; however, I might be misunderstanding some basic concepts.
Or should I submit a feature request to Google about a new role that only enables the access to the portal, and nothing else, so no such forms are shown?
Since Endpoints APIs must be explicitly shared with customers, the portal needs to verify that the logged-in user has permission to view that Endpoints API. So the short answer is that these scopes are being requested primarily so the portal can check the user's access to this API.
The longer answer is that we (the Endpoints team) are looking into whether it's possible to build narrower OAuth scopes that would correspond to the access checks we perform. We agree that it's an unnecessarily broad access request and are hoping to improve this in the future. Thanks for your comment!

Explicitly allow usage of production API

I'm exploring the WSO2 API Manager platform for use in an Open API project. The idea is that we forbid registration in the Store and create users ourselves. But we also want to give them only the Sandbox API as a starting point and then explicitly allow particular users to consume the Production API. I haven't found any information on this. Is it possible? If yes, where should I look?
You can restrict the token generation for the Production endpoints by using Workflows. Follow the documentation[1].
You could configure ProductionApplicationRegistration to use ApplicationRegistrationWSWorkflowExecutor and SandboxApplicationRegistration to use ApplicationRegistrationSimpleWorkflowExecutor.
With this approach, if a subscriber tries to generate a token for the production endpoints, it triggers a human task, which needs to be approved from the Admin Portal.
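As a sketch, the relevant entries in workflow-extensions.xml (stored in the registry at /_system/governance/apimgt/applicationdata/workflow-extensions.xml; the endpoint and credentials below are the documented defaults and will differ in your setup) would look roughly like:

<WorkFlowExtensions>
    <!-- Production key generation goes through an approval (human task) -->
    <ProductionApplicationRegistration executor="org.wso2.carbon.apimgt.impl.workflow.ApplicationRegistrationWSWorkflowExecutor">
        <Property name="serviceEndpoint">http://localhost:9765/services/ApplicationRegistrationWorkFlowProcess/</Property>
        <Property name="username">admin</Property>
        <Property name="password">admin</Property>
    </ProductionApplicationRegistration>
    <!-- Sandbox keys are issued immediately, with no approval step -->
    <SandboxApplicationRegistration executor="org.wso2.carbon.apimgt.impl.workflow.ApplicationRegistrationSimpleWorkflowExecutor"/>
</WorkFlowExtensions>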
For your requirement, you could write a custom workflow extension which restricts this by role or user name. For more information on writing a custom workflow extension, please follow [2].
[1] https://docs.wso2.com/display/AM210/Adding+an+Application+Registration+Workflow
[2] https://docs.wso2.com/display/AM210/Customizing+a+Workflow+Extension
Thanks and Regards

How can I use cctrlapp without constantly entering credentials?

I've started experimenting with cloudcontrol.com. They provide a CLI application called cctrlapp for managing projects.
However, many useful operations require a login. It is cumbersome and frustrating to have to put in my email address and password every time I push the current state of my app.
Can cctrlapp be configured to use stored credentials?
Recommended: We now support public-key authentication between the CLI and the API, which is more secure and more convenient. To set it up, simply run:
$ cctrluser setup
Read more about this here: http://www.paasfinder.com/introducing-public-key-authentication-for-cli-and-api/
Alternatively: You can set your credentials via the 'CCTRL_EMAIL' and 'CCTRL_PASSWORD' environment variables. If set, they're automatically used by cctrlapp.
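For example, in a bash session or your shell profile (the values are placeholders, and APP_NAME/default follows cctrlapp's app/deployment naming):

$ export CCTRL_EMAIL=you@example.com
$ export CCTRL_PASSWORD=your-password
$ cctrlapp APP_NAME/default push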
