Why are auth rule changes not considered configuration changes by Amplify pull?

Recently, I changed the access rules for my data model in the AWS Amplify Console and deployed them, but when I ran a subsequent amplify pull, the pre- and post-pull status reported "no changes" for all of my Amplify categories, even though the relevant files were updated as expected; for example, the rules changed from
rules: [{allow: public}])
to
rules: [{allow: public}, {allow: owner}, {allow: private, operations: [read]}, {allow: groups, groups: ["Testers"], operations: [read, create, update]}])
Why is this not considered a change by amplify pull, even though amplify pull is the command responsible for applying it?
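For context, in an Amplify project these rules live on an @auth directive in amplify/backend/api/<api-name>/schema.graphql; a minimal sketch of the post-change model (the Post type and its fields are hypothetical):
# schema.graphql — hypothetical model carrying the rules above
type Post @model @auth(rules: [
  {allow: public},
  {allow: owner},
  {allow: private, operations: [read]},
  {allow: groups, groups: ["Testers"], operations: [read, create, update]}
]) {
  id: ID!
  title: String
}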

Related

Jfrog Artifactory repository creation and permission automation

We are using JFrog Artifactory and are looking for a way to automate the creation of repositories, groups, and permissions for a list of items as part of an Azure DevOps pipeline.
For example, I want to create a virtual repo called "myproject-mvn-repo" with all its subcomponents as below:
Create a virtual repository: myproject-mvn-repo
Link an existing remote repo for Maven, or create one if it does not exist: myproject-mvn-remote-repo
Create 2 local repos if not existing: myproject-mvn-release-local-repo and myproject-mvn-snapshot-local-repo
Create a security group for the repos: myproject-sg
Create 2 permission targets for the repos and related builds: myproject-developers (read/write) and myproject-contributors (read/write/manage)
Add users to the group subsequently
I tried to follow the JFrog documentation, but couldn't loop through a number of items, and I would need to make it idempotent (it shouldn't create or modify any repo or component that is already present).
Let's split it into 2 parts - managing repositories, and managing permissions.
Repositories
In order to create / update / delete multiple repositories in a single request you can use the Artifactory YAML Configuration.
For example (simplified):
PATCH /artifactory/api/system/configuration
Content-Type: application/yaml

localRepositories:
  myproject-mvn-release-local-repo:
    type: maven
    ...
  myproject-mvn-snapshot-local-repo:
    type: maven
    ...
remoteRepositories:
  myproject-mvn-remote-repo:
    type: maven
    url: ...
    ...
virtualRepositories:
  myproject-mvn-repo:
    type: maven
    repositories:
      - myproject-mvn-release-local-repo
      - myproject-mvn-snapshot-local-repo
      - myproject-mvn-remote-repo
    ...
Note - this is a PATCH request, which means that if a repository already exists the request will not fail; instead, the repository's configuration will be updated based on the settings in this request.
Permissions
For managing permissions there are also two options - using projects (preferred), or using groups and permission targets.
Using Projects
From the documentation:
JFrog Projects is a management entity for hosting your resources (repositories, builds, Release Bundles, and Pipelines), and for associating users/groups as members with specific entitlements. As such, using projects helps Platform Admins to offload part of their day-to-day management effort and to generate a better separation between the customer products to improve customer visibility on efficiency, scale, cost, and security. Projects simplifies the onboarding process for new users, creates better visibility for LOBs and project stakeholders.
You can create projects, assign roles to users and groups in projects, assign repositories to projects, and more. Projects can be managed using REST API, specifically (but not limited to):
Add a New Project - to create a new project
Update User in Project - add a user as a member of the project with given roles
Update Group in Project - add a group as a member of the project with given roles
Move Repository in a Project - to assign a repository to a project
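As a rough sketch, creating a project over REST might look like the following (the endpoint path follows the "Add a New Project" documentation; the project key and payload fields are illustrative and worth verifying against your JFrog version):
POST /access/api/v1/projects
Content-Type: application/json

{
  "project_key": "myproj",
  "display_name": "My Project",
  "description": "Repos, builds and permissions for my project"
}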
Using Groups and Permission Targets
Manage groups using the REST API. First try to create the group; if it already exists, the call returns 409 Conflict, in which case you can use the update-group endpoint instead, or simply add/remove members of the group (see the idempotency sketch after the example below).
For example - create group myproject-developers with alice and bob as members (simplified):
POST /access/api/v2/groups
Content-Type: application/json

{
  "name": "myproject-developers",
  "description": "My project developers",
  "members": ["alice", "bob"],
  ...
}
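To keep this step idempotent in a pipeline, you can branch on the 409. A minimal shell sketch, assuming a bearer token and base URL in environment variables, and assuming the v2 update endpoint is PATCH /access/api/v2/groups/{name} (verify against your JFrog version):
# Try to create the group and capture the HTTP status code.
status=$(curl -s -o /dev/null -w "%{http_code}" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -X POST "$JFROG_URL/access/api/v2/groups" \
  -d '{"name": "myproject-developers", "members": ["alice", "bob"]}')

# 409 Conflict means the group already exists; update it instead of failing.
if [ "$status" = "409" ]; then
  curl -s \
    -H "Authorization: Bearer $TOKEN" \
    -H "Content-Type: application/json" \
    -X PATCH "$JFROG_URL/access/api/v2/groups/myproject-developers" \
    -d '{"members": ["alice", "bob"]}'
fi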
Manage permissions - use REST API to create / replace permission targets, aggregating the repositories and granting each group its relevant permissions on those repositories.
For example (simplified):
PUT /artifactory/api/security/permissions/myproject-permissions
Content-Type: application/json

{
  "name": "myproject-permissions",
  "repositories": [
    "myproject-mvn-release-local-repo",
    "myproject-mvn-snapshot-local-repo",
    "myproject-mvn-remote-repo"
  ],
  "principals": {
    "groups": {
      "myproject-developers": ["r", "w"],
      "myproject-contributors": ["r", "w", "m"]
    }
  },
  ...
}

Delete Terraform resource aws_secretsmanager_secret_version does not delete Secrets Manager secret entry

I created an AWS Secrets Manager secret and a secret key-value entry using Terraform as below. However, after I commented out the aws_secretsmanager_secret_version resource below and ran terraform apply, Terraform showed that it deleted the secret key-value entry, but I can still see the entry in the AWS console, and I can still use the CLI to get the secret key-value with aws secretsmanager get-secret-value --secret-id myTestName.
Is this entry really deleted? Why do I still see it in the AWS console? Or maybe it is deleted, but the one shown in the console and CLI is an old version? At least Terraform deleted it from its state file.
resource "aws_secretsmanager_secret" "test" {
name = "myTestName"
}
# I deleted secret key-value entry by
# commenting out below and apply terraform again
resource "aws_secretsmanager_secret_version" "test" {
secret_id = aws_secretsmanager_secret.test.id
secret_string = <<EOF
{
"test-key": "test-value"
}
EOF
}
According to AWS documentation:
...Secrets Manager does not immediately delete secrets. Instead,
Secrets Manager immediately makes the secrets inaccessible and
scheduled for deletion after a recovery window of a minimum of
seven days...
Due to the critical nature of secrets, this functionality exists for a reason: to prevent you from accidentally deleting an important production-grade secret, which would cause serious problems for the services that depend on it.
If you still want to delete a secret, you can do it with force:
aws secretsmanager delete-secret --secret-id your-secret --force-delete-without-recovery --region your-region
You may need to delete it with force if you want to immediately create a new secret with the same name, to avoid a name conflict.
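If you manage the secret itself with Terraform, the aws_secretsmanager_secret resource exposes a recovery_window_in_days argument; setting it to 0 skips the recovery window on destroy. A minimal sketch based on the resource above:
# Forces immediate deletion (no recovery window) when this
# resource is destroyed. Use with care on production secrets.
resource "aws_secretsmanager_secret" "test" {
  name                    = "myTestName"
  recovery_window_in_days = 0
}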
Update: As you clarified, in your specific case you want to delete a version of the secret. That cannot be done while the only version of the secret carries the AWSCURRENT staging label:
aws secretsmanager get-secret-value --secret-id myTestName
...
"Name": "myTestName",
"SecretString": " ...
"VersionStages": [
"AWSCURRENT"
]
...
From the terraform documentation:
If the AWSCURRENT staging label is present on this version during
resource deletion, that label cannot be removed and will be skipped to
prevent errors when fully deleting the secret. That label will leave
this secret version active even after the resource is deleted from
Terraform unless the secret itself is deleted. Move the AWSCURRENT
staging label before or after deleting this resource from Terraform to
fully trigger version deprecation if necessary.
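To see which versions of the secret exist and which staging labels they carry, you can list them with the AWS CLI:
aws secretsmanager list-secret-version-ids --secret-id myTestName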

GCloud databaseType is set to CLOUD_FIRESTORE but I'm using RealTimeDatabase

My project uses the Realtime Database (RTDB), but by mistake I activated Cloud Firestore. I don't use Firestore for anything, and apparently there isn't any way to disable it.
I created a Cloud Task, and when I run $ gcloud app describe it says databaseType: CLOUD_FIRESTORE. How can I change the databaseType to the Realtime Database?
$ gcloud app describe
authDomain: gmail.com
codeBucket: staging.xyz.appspot.com
databaseType: CLOUD_FIRESTORE // <------------------Here
defaultBucket: xyz.appspot.com
defaultHostname: xyz.ue.r.appspot.com
featureSettings:
splitHealthChecks: true
useContainerOptimizedOs: true
gcrDomain: us.gcr.io
id: xyz
locationId: us-east1
name: apps/xyz
serviceAccount: xyz@appspot.gserviceaccount.com
servingStatus: SERVING
According to this, I do have access to the databaseType field but I don't know how to change it or what value to change it to for the RealTimeDatabase. At the bottom, in blue, it says:
Note: To create a Firestore in Datastore mode database , set
databaseType to CLOUD_DATASTORE_COMPATIBILITY.
I've looked all over and at the time of this writing there isn't a way to change the databaseType.
On a side note, after I finally got a Cloud Task to run successfully, along with the function associated with it, the databaseType setting didn't prevent anything from working.
Basically, as long as you are only using the Realtime Database, the databaseType being set to CLOUD_FIRESTORE is nothing to worry about.

Firebase + Datastore = need_index

I'm working through the appengine+go tutorial, which connects in with Firebase: https://cloud.google.com/appengine/docs/standard/go/building-app/. The code is available at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine/gophers/gophers-6, which aside from my Firebase keys is identical.
I have it working locally just fine under dev_appserver.py, and it queries the Vision API and adds labels. However, after I deploy to appengine I get an index error on datastore. If I go to the Firebase console, I see the collection (Post) and the field (Posted) which is a timestamp.
If I change this line: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/appengine/gophers/gophers-6/main.go#L193 to remove the Order("-Posted"), then everything works (it's important to note that any Order call causes the error), except that the test records I've posted come back in random order.
The error message when running in appengine is: "Getting posts: API error 4 (datastore_v3: NEED_INDEX): no matching index found."
I've attempted to create a composite index and to test locally with --require_indexes=true, but it hasn't helped me debug the issue.
Edit: I've moved this over to use Firebase's Datastore libraries directly, instead of the GCP updates. I never solved this particular issue, but was able to move forward with my app actually working :)
By default the local development server automatically creates the composite indexes needed for the actual queries invoked in your app. From Creating indexes using the development server:
The development web server (dev_appserver.py) automatically adds
items to this file when the application tries to execute a query that
needs an index that does not have an appropriate entry in the
configuration file.
In the development server, if you exercise every query that your app
will make, the development server will generate a complete list of
entries in the index.yaml file.
When the development web server adds a generated index definition to
index.yaml, it does so below the following line, inserting it if
necessary:
# AUTOGENERATED
The development web server considers all index definitions below this
line to be automatic, and it might update existing definitions below
this line as the application makes queries.
But you also need to deploy the generated index configuration to Cloud Datastore and let it update its indexing information (i.e. let the indexes reach the Serving state) for the respective queries to stop hitting the NEED_INDEX error. From Updating indexes:
You upload your index.yaml configuration file to Cloud Datastore
with the gcloud command. If the index.yaml file defines any
indexes that don't exist in Cloud Datastore, those new indexes are
built.
It can take a while for Cloud Datastore to create all the indexes and
therefore, those indexes won't be immediately available to App Engine.
If your app is already configured to receive traffic, then exceptions
can occur for queries that require an index that is still in the
process of being built.
To avoid exceptions, you must allow time for all the indexes to build.
For more information and examples about creating indexes, see
Deploying a Go App.
To upload your index configuration to Cloud Datastore, run the
following command from the directory where your index.yaml is located:
gcloud datastore create-indexes index.yaml
For information, see the gcloud datastore reference.
You can use the GCP Console to check the status of your indexes.
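Alternatively, the index status can be checked from the command line:
gcloud datastore indexes list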

Multiple applications in the same Symfony2 application

This is quite a long question, but there's quite a lot to it.
It feels like it should be a reasonably common use case, so I'm hoping the Stack Overflow community can provide me with a 'best practice in Symfony2' answer.
The solution I describe below works, but there are several consequences I'd like to avoid:
In my local dev environment, if I have used the wrong DB connection, the tests will pass in dev but fail in production
The routes of the ADMIN API are accessible on the PUBLIC API url, just denied.
If I have a mirror of live in my dev environment (3 separate checkouts with the corresponding parameters.yml file) then the feature tests for the other bundles fail
Is there a 'best practice in Symfony2' way to set up my project?
We're running a LAMP stack. We use git/(Atlassian) stash for version control.
We're using doctrine for the ORM and FOS-REST with OAuth plus symfony firewalls to authenticate and authorise the users.
We're committed to use Symfony2, so I am trying to find a 'best practice' solution:
I have a project with 3 applications:
A public-facing API (which gives read-only access to the data)
A protected API (which provides admin functionality)
A set of batch processes (to e.g. import data and monitor data quality)
Each application uses a set of shared models.
I have created 4 bundles, one each for the application and a 4th for the shared models.
Each application must use a different database user to access the database.
There's only one database.
There are several tables; one is called 'prices'
The admin API only must be accessible from one hostname (e.g. admin-api.server1)
The public API only must be accessible from a different hostname (e.g. public-api.server2)
Each application is hosted on a different server
In parameters.yml in my dev environment I have this
// parameters.yml
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: user2
api_admin_db_pass: pass2
batch_db_user: user3
batch_db_pass: pass3
In config.yml I have this:
// config.yml
doctrine:
    dbal:
        connections:
            api_public:
                user: "%api_public_db_user%"
                password: "%api_public_db_pass%"
            api_admin:
                user: "%api_admin_db_user%"
                password: "%api_admin_db_pass%"
            batch:
                user: "%batch_db_user%"
                password: "%batch_db_pass%"
In my code I can do this (I believe this can be done from the service container too, but I haven't got that far yet)
$entityManager = $this->getContainer()->get('doctrine')->getManager('api_public');
$entityRepository = $this->getContainer()->get('doctrine')->getRepository('CommonBundle:Price', 'api_admin');
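For the service container route, DoctrineBundle registers each configured entity manager as a service named doctrine.orm.<name>_entity_manager, so it can be injected directly. A hypothetical sketch, assuming an entity manager named api_public is configured under doctrine.orm.entity_managers and a service class of your own:
# services.yml
services:
    my_project.price_reader:
        class: Acme\CommonBundle\Service\PriceReader   # hypothetical class
        arguments:
            - "@doctrine.orm.api_public_entity_manager"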
When I deploy my code to each of the live servers, I put junk values in the parameters.yml for the other applications
// parameters.yml on the public api server
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: **JUNK**
api_admin_db_pass: **JUNK**
batch_db_user: **JUNK**
batch_db_pass: **JUNK**
I have locked down my application so that the database isn't accessible (and thus the other API features don't work)
I have also set up Symfony firewall security so that the different routes require different permissions
There's also security in the Apache vhost to deny access to, say, the admin API path from the public API directory.
So, I have secured my application and met the requirement of the security audit, but the dev process isn't ideal and something feels wrong.
As background:
We have previously looked at splitting it up into different applications within the same project (like this: Symfony2 multiple applications and api centric application; we actually followed this method: http://jolicode.com/blog/multiple-applications-with-symfony2), but we ran into difficulties, and in any case Fabien says not to (https://groups.google.com/forum/#!topic/symfony-devs/yneojUuFiqw). That this existed in Symfony1 and was removed in Symfony2 is enough of an argument for me.
We have previously gone down the route of splitting up each bundle and importing it using composer, but this caused too many development overheads (for example, having to modify many repositories to implement a feature; it not being possible to see all of the changes for a feature in a single pull request).
We are receiving an ever growing number of requests to create APIs, and we're similarly worried about putting each application in its own repository.
So, putting each of the three applications in a separate Symfony project / git repository is something we want to avoid too.
