PHPUnit test: where to store the user and password - Symfony

I'm using Symfony with functional tests.
I have a login controller where the user sends a username and password. The controller checks whether the user exists and whether the password is correct (by checking the password hash).
I want to test it with PHPUnit.
The problem is that I don't know where to store the real username and password for the tests. I don't want to type them in every time, and I don't want to store them in the code (and therefore in the public repository).
The tests are run both locally (localhost) and on the real server.
Does anyone have an idea what the best solution is?

One option: use mod-sec (ModSecurity) in front of the production server and remove it during the test session.

You can use a different database for the test environment by setting the database URL in the .env.test file. Before the tests run, use fixtures to initialize data such as users.
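For example, a minimal fixture sketch (assuming doctrine/doctrine-fixtures-bundle, a User entity with setEmail()/setPassword(), and Symfony's password hasher; the TEST_USER_* variable names are assumptions):
<?php
// src/DataFixtures/TestUserFixtures.php
namespace App\DataFixtures;

use App\Entity\User;
use Doctrine\Bundle\FixturesBundle\Fixture;
use Doctrine\Persistence\ObjectManager;
use Symfony\Component\PasswordHasher\Hasher\UserPasswordHasherInterface;

class TestUserFixtures extends Fixture
{
    private UserPasswordHasherInterface $hasher;

    public function __construct(UserPasswordHasherInterface $hasher)
    {
        $this->hasher = $hasher;
    }

    public function load(ObjectManager $manager): void
    {
        // The test environment points at its own database, e.g. in .env.test:
        // DATABASE_URL="sqlite:///%kernel.project_dir%/var/test.db"
        $user = new User();
        // Read the test credentials from the environment (or CI secrets),
        // so they are never committed to the repository.
        $user->setEmail($_ENV['TEST_USER_EMAIL'] ?? 'test@example.com');
        $user->setPassword(
            $this->hasher->hashPassword($user, $_ENV['TEST_USER_PASSWORD'] ?? 'test-password')
        );
        $manager->persist($user);
        $manager->flush();
    }
}
The real values can then live in .env.test.local (ignored by Git) or in CI secrets, so the functional tests can log in with them without the credentials ever appearing in the code.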

Related

How to change encrypted password in context file without using the studio

I am using a group context to configure the DB connection. The password of the DB has the password type. When deploying the job, the password is automatically encrypted in default.properties under the contexts folder.
What if I want to change the password without using the Studio (in a client environment)? What can I use to encrypt the new password?
I was able to do it by creating a separate encryption job with a tJava component and the following code:
System.out.println(routines.system.PasswordEncryptUtil.encryptPassword(context.Password));
where context.Password is an input context variable of type String. When running the job, the user is prompted to enter a password and then the encrypted Talend password will be printed. It will have the following format: enc:routine.encryption.key.v1:[encryptedPassword] The routine encryption key can be modified if needed by following this link: https://help.talend.com/r/en-US/8.0/installation-guide-data-integration-windows/rotating-encryption-keys-in-talend-studio
There are actually a few ways to do this:
myJob.sh --context_param myPassword=pass123
Unfortunately, this can be seen by anyone via ps / Task Manager.
You can also edit the contexts/contextName.properties file and change the context parameters there. This way the context can only be seen if you have access to the file.
Theoretically both should be able to accept the cleartext/encrypted password.
The implicit context load feature can also be used to load contexts: https://help.talend.com/r/en-US/8.0/data-integration-job-examples/creating-job-and-defining-context-variables

Allow the user to modify some parameters from .env Symfony

I have some config variables in the .env file. I want to create a page in my web application to allow administrators to modify the value of some .env variables (for example, the address configured to send mails). For this purpose, I have:
MAILER_SENDER_ADDRESS=backoffice@example.com
MAILER_SENDER_NAME="Application Name"
MAILER_URL=gmail://firstname.lastname@gmail.com:ijfxxiencrrdqihe@localhost
I am able to read the current values in my controller, but I don't know how to save the values the user fills in on my form.
Please, any help would be really appreciated.
Environment variables are there to help you specify variables for the particular environment your application runs in. For example, you could have your app sitting locally on the computer you develop on, and you could have it in the cloud running the production version of your app, a version which will actually send emails correctly using real data.
What you need to do is store the settings you let your users customise somewhere, for example in a database. When it comes to sending the emails, you will then have to do something like the following:
$message = (new Swift_Message())
->setFrom(['john@doe.com' => 'John Doe'])
...
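For illustration, a sketch of that idea (the MailSettings entity, its getters, and the repository lookup are hypothetical; the sender now comes from the database instead of .env):
// Hypothetical: load the admin-editable settings from the database.
$settings = $entityManager->getRepository(MailSettings::class)->findOneBy([]);

$message = (new Swift_Message('Subject'))
    ->setFrom([$settings->getSenderAddress() => $settings->getSenderName()])
    ->setTo('recipient@example.com')
    ->setBody('Hello!');

$mailer->send($message);
Your admin form then simply edits the MailSettings row instead of rewriting the .env file, which also avoids having the web server write to its own configuration.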

How to get Travis CI to work with an SSH key: currently gets stuck when accessing my private repo (wants the username)

I already followed the steps exactly as specified at this link.
However, I am still having the issue. My build will get stuck when accessing the private repo.
$ julia --check-bounds=yes -e 'Pkg.clone("https://github.com/xxxx/xxxx.git")'
INFO: Cloning xxxx from https://github.com/xxxx/xxxx.git
Username for 'https://github.com':
Done: Job Cancelled
Note: I manually cancel it after a few minutes of waiting. How can I get it to use the SSH key I have set up and bypass this username and password prompt?
Note: xxxx is used in place of the name of my project to make this post general. I have already checked out the links on Travis CI and they don't make it clear what needs to occur. Thank you!
Update: I tried to add a GitHub token, Pkg.clone("https://fake_git_hub_token@github.com/xxxx/xxxx.git"), and it still prompts me to sign in with the username. I gave that token full repo access. Also, note that I am using the Travis CI virtual machine.
In the Travis CI docs they reference the following:
Assumptions:
The repository you are running the builds for is called “myorg/main” and depends on “myorg/lib1” and “myorg/lib2”.
You know the credentials for a user account that has at least read access to all three repositories.
To pull in dependencies with a password, you will have to use the user name and password in the Git HTTPS URL: https://ci-user:mypassword123@github.com/myorg/lib1.git.
You need to add the SSH key to the Travis UI under an environment variable for your desired repo. You also need to add the key to the .travis.yml file in that repo.
The Travis docs are at https://docs.travis-ci.com.
SOLUTION: just add Travis_CI_Username:my_password@github.com/organizer_of_the_repo/Dependancy.git to the .travis.yml file.
If this is unclear, please comment and I will update, but this is how I got it to work for me (even though I went through all the SSH key business).
In my case, I am going to make a fake admin account to run the tests, since someone will have to expose their password to use this setup. Note that you can set up two-factor authentication on the admin account so that only one person can access it even if they know the password.

Symfony 3 - How to change configuration values at runtime

What is the best-practice way to handle changes to configuration parameters (kept in YAML) that have to happen at runtime?
I am working on a site where the owner wants to change various settings in his admin back end.
For example, enabling/disabling the confirmation email and link sent by FOS User bundle when a new user registers for an account.
Thanks for your time
For those operations you need to use a compiler pass:
https://symfony.com/doc/current/service_container/compiler_passes.html
Here is a sample custom compiler pass:
https://symfony.com/doc/current/components/dependency_injection/compilation.html#creating-separate-compiler-passes
Here is a good example of compiler passes (usually used together with service tags):
https://symfony.com/doc/current/service_container/tags.html
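For illustration, a minimal compiler pass sketch (the AppBundle namespace and the app.registration_confirmation parameter are assumptions):
<?php
// src/AppBundle/DependencyInjection/Compiler/SettingsPass.php
namespace AppBundle\DependencyInjection\Compiler;

use Symfony\Component\DependencyInjection\Compiler\CompilerPassInterface;
use Symfony\Component\DependencyInjection\ContainerBuilder;

class SettingsPass implements CompilerPassInterface
{
    public function process(ContainerBuilder $container)
    {
        // Hypothetical flag read by the code that triggers the FOSUserBundle
        // confirmation email; set it here from wherever the admin stores it.
        $container->setParameter('app.registration_confirmation', true);
    }
}
The pass is registered in the bundle's build() method via $container->addCompilerPass(new SettingsPass()). Note that compiler passes run when the container is compiled, so values set this way only change when the cache is rebuilt; settings that must be editable at runtime are usually stored in the database instead.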

What's the best method for passing AWS credentials as user data to an EC2 instance?

I have a job processing architecture based on AWS that requires EC2 instances to query S3 and SQS. In order for running instances to have access to the API, the credentials are sent as user data (-f) in the form of a base64-encoded shell script. For example:
$ cat ec2.sh
...
export AWS_ACCOUNT_NUMBER='1111-1111-1111'
export AWS_ACCESS_KEY_ID='0x0x0x0x0x0x0x0x0x0'
...
$ zip -P 'secret-password' ec2.sh
$ openssl enc -base64 -in ec2.zip
Many instances are launched...
$ ec2run ami-a83fabc0 -n 20 -f ec2.zip
Each instance decodes and decrypts ec2.zip using the 'secret-password' which is hard-coded into an init script. Although it does work, I have two issues with my approach.
'zip -P' is not very secure
The password is hard-coded in the instance (it's always 'secret-password')
The method is very similar to the one described here
Is there a more elegant or accepted approach? Using gpg to encrypt the credentials and storing the private key on the instance to decrypt it is an approach I'm considering now but I'm unaware of any caveats. Can I use the AWS keypairs directly? Am I missing some super obvious part of the API?
You can store the credentials on the machine (or transfer, use, then remove them.)
You can transfer the credentials over a secure channel (e.g. using scp with non-interactive authentication e.g. key pair) so that you would not need to perform any custom encryption (only make sure that permissions are properly set to 0400 on the key file at all times, e.g. set the permissions on the master files and use scp -p)
If the above does not answer your question, please provide more specific details re. what your setup is and what you are trying to achieve. Are EC2 actions to be initiated on multiple nodes from a central location? Is SSH available between the multiple nodes and the central location? Etc.
EDIT
Have you considered parameterizing your AMI, requiring those who instantiate your AMI to first populate the user data (ec2-run-instances -f user-data-file) with their AWS keys? Your AMI can then dynamically retrieve these per-instance parameters from http://169.254.169.254/1.0/user-data.
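For illustration, a minimal sketch of reading those per-instance parameters from inside the instance (PHP only as an example; the KEY='value' format matches the ec2.sh script in the question):
// Fetch the raw user data from the instance metadata service.
$userData = file_get_contents('http://169.254.169.254/1.0/user-data');

// Pull a value out of the KEY='value' lines, e.g. AWS_ACCESS_KEY_ID.
preg_match("/AWS_ACCESS_KEY_ID='([^']+)'/", $userData, $matches);
$accessKeyId = $matches[1] ?? null;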
UPDATE
OK, here goes a security-minded comparison of the various approaches discussed so far:
Security of data when stored in the AMI user-data unencrypted
low
clear-text data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
you are vulnerable to proxy request attacks (e.g. attacker asks the Apache that may or may not be running on the AMI to get and forward the clear-text http://169.254.169.254/1.0/user-data)
Security of data when stored in the AMI user-data and encrypted (or decryptable) with easily obtainable key
low
easily-obtainable key (password) may include:
key hard-coded in a script inside an AMI (where the AMI can be obtained by an attacker)
key hard-coded in a script on the AMI itself, where the script is readable by any user who manages to log onto the AMI
any other easily obtainable information such as public keys, etc.
any private key (its public key may be readily obtainable)
given an easily-obtainable key (password), the same problems identified in point 1 apply, namely:
the decrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access clear-text http://169.254.169.254/1.0/user-data)
you are vulnerable to proxy request attacks (e.g. attacker asks the Apache that may or may not be running on the AMI to get and forward the encrypted http://169.254.169.254/1.0/user-data, subsequently decrypted with the easily-obtainable key)
Security of data when stored in the AMI user-data and encrypted with not easily obtainable key
average
the encrypted data is accessible to any user who manages to log onto the AMI and has access to telnet, curl, wget, etc. (can access encrypted http://169.254.169.254/1.0/user-data)
an attempt to decrypt the encrypted data can then be made using brute-force attacks
Security of data when stored on the AMI, in a secured location (no added value for it to be encrypted)
higher
the data is only accessible to one user, the user who requires the data in order to operate
e.g. file owned by user:user with mask 0600 or 0400
attacker must be able to impersonate the particular user in order to gain access to the data
additional security layers, such as denying the user direct log-on (having to pass through root for interactive impersonation) improves security
So any method involving the AMI user-data is not the most secure, because gaining access to any user on the machine (weakest point) compromises the data.
This could be mitigated if the S3 credentials were only required for a limited period of time (i.e. during the deployment process only), if AWS allowed you to overwrite or remove the contents of user-data when done with it (but this does not appear to be the case.) An alternative would be the creation of temporary S3 credentials for the duration of the deployment process, if possible (compromising these credentials, from user-data, after the deployment process is completed and the credentials have been invalidated with AWS, no longer poses a security threat.)
If the above is not applicable (e.g. S3 credentials needed by deployed nodes indefinitely) or not possible (e.g. cannot issue temporary S3 credentials for deployment only) then the best method remains to bite the bullet and scp the credentials to the various nodes, possibly in parallel, with the correct ownership and permissions.
I wrote an article examining various methods of passing secrets to an EC2 instance securely and the pros & cons of each.
http://www.shlomoswidler.com/2009/08/how-to-keep-your-aws-credentials-on-ec2/
The best way is to use instance profiles. The basic idea is:
Create an instance profile
Create a new IAM role
Assign a policy to the previously created role, for example:
{
  "Statement": [
    {
      "Sid": "Stmt1369049349504",
      "Action": "sqs:*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Associate the role and instance profile together.
When you start a new EC2 instance, make sure you provide the instance profile name.
If all works well, and the library you use to connect to AWS services from within your EC2 instance supports retrieving the credentials from the instance meta-data, your code will be able to use the AWS services.
A complete example taken from the boto-user mailing list:
First, you have to create a JSON policy document that represents what services and resources the IAM role should have access to. For example, this policy grants all S3 actions for the bucket "my_bucket". You can use whatever policy is appropriate for your application.
BUCKET_POLICY = """{
"Statement":[{
"Effect":"Allow",
"Action":["s3:*"],
"Resource":["arn:aws:s3:::my_bucket"]}]}"""
Next, you need to create an Instance Profile in IAM.
import boto
c = boto.connect_iam()
instance_profile = c.create_instance_profile('myinstanceprofile')
Once you have the instance profile, you need to create the role, add the role to the instance profile and associate the policy with the role.
role = c.create_role('myrole')
c.add_role_to_instance_profile('myinstanceprofile', 'myrole')
c.put_role_policy('myrole', 'mypolicy', BUCKET_POLICY)
Now, you can use that instance profile when you launch an instance:
ec2 = boto.connect_ec2()
ec2.run_instances('ami-xxxxxxx', ..., instance_profile_name='myinstanceprofile')
I'd like to point out that it is no longer necessary to supply any credentials to your EC2 instance. Using IAM, you can create a role for your EC2 instances. In these roles, you can set fine-grained policies that allow your EC2 instance to, for example, get a specific object from a specific S3 bucket and no more. You can read more about IAM roles in the AWS docs:
http://docs.aws.amazon.com/IAM/latest/UserGuide/WorkingWithRoles.html
Like others have already pointed out here, you don't really need to store AWS credentials for an EC2 instance if you use IAM roles -
https://aws.amazon.com/blogs/security/a-safer-way-to-distribute-aws-credentials-to-ec2/.
I will add that you can employ the same method for securely storing non-AWS credentials for your EC2 instance, for example if you have some DB credentials you want to keep secure: you save the non-AWS credentials in an S3 bucket and use an IAM role to access that bucket.
You can find more detailed information on that here - https://aws.amazon.com/blogs/security/using-iam-roles-to-distribute-non-aws-credentials-to-your-ec2-instances/
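For illustration, a sketch using the AWS SDK for PHP (the bucket and key names are hypothetical; no credentials are configured because the SDK falls back to the instance profile automatically):
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// No explicit credentials: with an IAM role attached, the SDK reads
// temporary credentials from the instance metadata service.
$s3 = new S3Client([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Hypothetical bucket/key holding the non-AWS (e.g. DB) credentials.
$result = $s3->getObject([
    'Bucket' => 'my-secrets-bucket',
    'Key'    => 'db-credentials.json',
]);

$dbCredentials = json_decode((string) $result['Body'], true);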
