Data Protection using Entity Framework Core - asp.net-core-webapi

So I have followed Microsoft's official guide (https://learn.microsoft.com/el-gr/aspnet/core/security/data-protection/implementation/key-storage-providers?view=aspnetcore-2.2&tabs=visual-studio) for encrypting data and storing it in the database using Entity Framework Core, but I can't make it work across multiple machines. I chose the Entity Framework Core implementation because the guide says "With this package, keys can be shared across multiple instances of a web app." The app works perfectly when used from the deployed version, for example xyz.com, but it doesn't work when I access it from localhost. Will this be a problem later, when my virtual machine is maxed out and I want to add another one? If so, how can I make it work both on the deployed site and on different machines? I haven't found any tutorial that implements this, and I have searched everywhere. Thank you very much.
services.AddDataProtection()
    .UseCryptographicAlgorithms(
        new AuthenticatedEncryptorConfiguration()
        {
            EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
            ValidationAlgorithm = ValidationAlgorithm.HMACSHA256,
        })
    .PersistKeysToDbContext<DataContext>();
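For reference, PersistKeysToDbContext<DataContext>() requires the context to implement IDataProtectionKeyContext and expose a DataProtectionKeys set that the key ring is stored in. A minimal sketch, assuming a context class named DataContext as in the snippet above (the constructor signature here is an assumption):

using Microsoft.AspNetCore.DataProtection.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

public class DataContext : DbContext, IDataProtectionKeyContext
{
    public DataContext(DbContextOptions<DataContext> options) : base(options) { }

    // Table the Data Protection system reads and writes its keys to.
    public DbSet<DataProtectionKey> DataProtectionKeys { get; set; }
}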
Update 12-6-2019
So I followed Microsoft's documentation (https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/implementation/key-encryption-at-rest?view=aspnetcore-2.2)
and it states:
"If the app is spread across multiple machines, it may be convenient to distribute a shared X.509 certificate across the machines and configure the hosted apps to use the certificate for encryption of keys at rest"
I generated an X.509 certificate using this tutorial:
(https://www.youtube.com/watch?v=1xtBkukWiek)
My updated code:
services.AddDataProtection()
    .UseCryptographicAlgorithms(
        new AuthenticatedEncryptorConfiguration()
        {
            EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
            ValidationAlgorithm = ValidationAlgorithm.HMACSHA256,
        })
    .ProtectKeysWithCertificate(new X509Certificate2("wibit-test-cert.pfx", "password"))
    .PersistKeysToDbContext<DataContext>();
When testing on my local machine it works fine, but when I deploy it, I get this error:
error: "The system cannot find the file specified"
I have tried several ways to fix it, including _hostingEnvironment.ContentRootPath and WebRootPath. Both of these, as well as the approach in the updated code, work on my machine but not in the deployed app.
Any clues?

I finally fixed it!
The problem was that I hadn't set the application name:
.SetApplicationName("myapp")
And I changed the path of the certificate to this:
.ProtectKeysWithCertificate(new X509Certificate2(Path.Combine(_hostingEnvironment.ContentRootPath,"wibit-test-cert.pfx"), "password"))
It may also have been a permissions problem, because when I hosted the app on A2Hosting it couldn't find the specified file (wibit-test-cert.pfx), but when I deployed it on GCP it worked!
Now I can encrypt and decrypt data using the same database with different apps.
So my final code is this:
services.AddDataProtection()
    .UseCryptographicAlgorithms(
        new AuthenticatedEncryptorConfiguration()
        {
            EncryptionAlgorithm = EncryptionAlgorithm.AES_256_CBC,
            ValidationAlgorithm = ValidationAlgorithm.HMACSHA256,
        })
    .SetApplicationName("myapp")
    .ProtectKeysWithCertificate(new X509Certificate2(Path.Combine(_hostingEnvironment.ContentRootPath, "wibit-test-cert.pfx"), "password"))
    .PersistKeysToDbContext<DataContext>();
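If reading the .pfx from the content root is blocked by host permissions (as it seemed to be on A2Hosting above), one alternative is to install the certificate on each machine and load it from the certificate store instead of from disk. A minimal sketch, assuming the certificate is installed in LocalMachine/My and "THUMBPRINT" is a placeholder for its thumbprint:

using System;
using System.Security.Cryptography.X509Certificates;

private static X509Certificate2 LoadDataProtectionCertificate(string thumbprint)
{
    using (var store = new X509Store(StoreName.My, StoreLocation.LocalMachine))
    {
        store.Open(OpenFlags.ReadOnly);

        // Look the certificate up by thumbprint; validOnly: false also returns
        // self-signed test certificates.
        var matches = store.Certificates.Find(
            X509FindType.FindByThumbprint, thumbprint, validOnly: false);

        if (matches.Count == 0)
            throw new InvalidOperationException("Data Protection certificate not found in the store.");

        return matches[0];
    }
}

The result can be passed to ProtectKeysWithCertificate in place of the file-based X509Certificate2; if I recall the API correctly, there is also a ProtectKeysWithCertificate overload that accepts a thumbprint string and resolves the certificate from the store for you.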

Related

Azure function runtime unreachable using app settings

I have an Azure Function app (timer-triggered functions) which is giving an "Azure Functions runtime is unreachable" error. We are using appsetting.json in place of local.host.json, and the variables are configured in the DevOps pipeline. I can see that the variables get updated in the Azure Function app files; however, the function is not executing, yet it shows some memory consumption every 30 minutes. Please share your suggestions to fix this. Thanks!
As far as I know,
"Azure Functions runtime is unreachable" appears if the function app is blocked by a firewall or the storage account connection string is configured incorrectly.
I found a few solved issues on SO for similar errors in the context of Azure Functions deployed through Azure DevOps pipelines, where user #JsAndDotNet resolved the above error by correcting the Platform value in the Configuration menu of the deployed Azure Function App in the portal.
Another user, #DelliganeshSevanesan, solved this error in the context of Azure Functions deployment using IDEs (70934637), where multiple possible causes of this error are listed along with their resolutions.
Also, I have observed that we need to add the Azure Function App configuration settings in the pipeline as key-value pairs, which is essentially the transformation of the Azure Function App's local.settings.json for the Azure CI/CD pipelines. For this, I found practical workarounds given by #VijayanathViswanathan and #Sajid in these SO issues (1 & 2).

On-prem ASP.NET Framework web app with Azure Key Vault

We're in the process of trying to secure our application secrets in our internal ASP.NET Framework web applications. The initial plan offered to me was to use Azure Key Vault. I began development work using my Visual Studio Enterprise subscription, and that seems to work fine, locally.
We've created a second Key Vault in our company's production environment, and again, I can use it locally, because my own AAD account has access to the vault. However, in this project (4.7.2 Web Forms web application), I don't see any means of specifying the Access Policy principal that we've created for the application.
My google-fu is failing me: is there any documentation that explains how to do this? Is this scenario -- an on-prem, ASP.NET Framework app outside of the Azure environment, accessing Key Vault for configuration values -- even possible?
Thanks.
UPDATE: I was unable to find a solution that would allow me to use the Access Policy principal from within the "Add Connected Service" dialog. I'm somewhat surprised it's not in there, or is hidden enough to elude me. So I ended up writing my own Key Vault Secret-Reader function, similar to the marked answer. Hope this helps someone...
In this scenario, your option is to use a service principal to access the Key Vault. Please follow the steps below; my sample gets a secret from the Key Vault.
1. Register an application with Azure AD and create a service principal.
2. Get the values for signing in and create a new application secret.
3. Navigate to the Key Vault in the portal -> Access policies -> add the correct secret permissions for the service principal.
4. Then use the code below, replacing <client-id>, <tenant-id>, and <client-secret> with the values obtained before.
using System;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

namespace test1
{
    class Program
    {
        static void Main(string[] args)
        {
            // Authenticate as the service principal using its client id, tenant id and secret.
            var azureServiceTokenProvider = new AzureServiceTokenProvider("RunAs=App;AppId=<client-id>;TenantId=<tenant-id>;AppKey=<client-secret>");
            var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));

            // Read the secret named "mySecret123" from the vault and print its value.
            var secret = kv.GetSecretAsync("https://keyvaultname.vault.azure.net/", "mySecret123").GetAwaiter().GetResult();
            Console.WriteLine(secret.Value);
        }
    }
}
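A small variant of the same sample, in case you prefer not to hard-code the secret in source: as far as I know, the parameterless AzureServiceTokenProvider constructor reads the connection string from the AzureServicesAuthConnectionString environment variable, so the values can be supplied by the hosting environment or configuration instead.

// Assumption: AzureServicesAuthConnectionString is set in the environment, e.g.
// RunAs=App;AppId=<client-id>;TenantId=<tenant-id>;AppKey=<client-secret>
var azureServiceTokenProvider = new AzureServiceTokenProvider();
var kv = new KeyVaultClient(new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));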

Hub inheritance in Signalr in ASP.NET Framework

I'm using signalR to build a real-time website.
I have 2 hubs:
NotificationHubCore
NotificationHub (inherits NotificationHubCore)
My solution includes 2 small projects: Domain & Web.
I put NotificationHubCore in Domain and NotificationHub in Web.
Now, in the Web project, I want to access NotificationHubCore by using:
GlobalHost.ConnectionManager.GetHubContext<NotificationHubCore>();
It always returns null to me.
My question is: how can I access NotificationHubCore through NotificationHub?
I've tried:
var notificationHub = new NotificationHub();
GlobalHost.DependencyResolver.Register(typeof(NotificationHubCore), () => notificationHub);
But that way didn't work.
Can anyone help me please?
Thank you,
You may use SQL Server to distribute messages across a SignalR application that is deployed in two separate applications.
Create a new database for the backplane to use. You can give the database any name. You don’t need to create any tables in the database; the backplane will create the necessary tables.
Refer to this article and the "Scaleout with SQL Server" section for further details - why SignalR
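A minimal sketch of that setup, assuming the Microsoft.AspNet.SignalR.SqlServer package is installed and "BackplaneDb" is a connection string pointing at the empty database created for the backplane:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var connectionString = System.Configuration.ConfigurationManager
            .ConnectionStrings["BackplaneDb"].ConnectionString;

        // Register the SQL Server backplane before mapping SignalR so that both
        // applications exchange messages through the shared database.
        GlobalHost.DependencyResolver.UseSqlServer(connectionString);

        app.MapSignalR();
    }
}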

Multiple applications in the same Symfony2 application

This is quite a long question, but there's quite a lot to it.
It feels like it should be a reasonably common use case, so I'm hoping the Stack Overflow community can provide me with a 'best practice in Symfony2' answer.
The solution I describe below works, but there are several consequences I'd like to avoid:
In my local dev environment, if I have used the wrong db connection the test will work in dev but fail on production
The routes of the ADMIN API are accessible on the PUBLIC API url, just denied.
If I have a mirror of live in my dev environment (3 separate checkouts with the corresponding parameters.yml file) then the feature tests for the other bundles fail
Is there a 'best practice in Symfony2' way to set up my project?
We're running a LAMP stack. We use git/(Atlassian) stash for version control.
We're using doctrine for the ORM and FOS-REST with OAuth plus symfony firewalls to authenticate and authorise the users.
We're committed to use Symfony2, so I am trying to find a 'best practice' solution:
I have a project with 3 applications:
A public-facing API (which gives read-only access to the data)
A protected API (which provides admin functionality)
A set of batch processes (to e.g. import data and monitor data quality)
Each application uses a set of shared models.
I have created 4 bundles, one each for the application and a 4th for the shared models.
Each application must use a different database user to access the database.
There's only one database.
There are several tables; one is called 'prices'
The admin API must be accessible only from one hostname (e.g. admin-api.server1)
The public API must be accessible only from a different hostname (e.g. public-api.server2)
Each application is hosted on a different server
In parameters.yml in my dev environment I have this
# parameters.yml
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: user2
api_admin_db_pass: pass2
batch_db_user: user3
batch_db_pass: pass3
In config.yml I have this:
# config.yml
doctrine:
    dbal:
        connections:
            api_public:
                user: "%api_public_db_user%"
                password: "%api_public_db_pass%"
            api_admin:
                user: "%api_admin_db_user%"
                password: "%api_admin_db_pass%"
            batch:
                user: "%batch_db_user%"
                password: "%batch_db_pass%"
In my code I can do this (I believe this can be done from the service container too, but I haven't got that far yet)
$entityManager = $this->getContainer()->get('doctrine')->getManager('api_public');
$entityRepository = $this->getContainer()->get('doctrine')->getRepository('CommonBundle:Price', 'api_admin');
When I deploy my code to each of the live servers, I put junk values in the parameters.yml for the other applications
# parameters.yml on the public api server
api_public_db_user: user1
api_public_db_pass: pass1
api_admin_db_user: **JUNK**
api_admin_db_pass: **JUNK**
batch_db_user: **JUNK**
batch_db_pass: **JUNK**
I have locked down my application so that the database isn't accessible (and thus the other API features don't work)
I have also set up Symfony firewall security so that the different routes require different permissions
There's also security in the apache vhost to deny access to say the admin api path from the public api directory.
So, I have secured my application and met the requirement of the security audit, but the dev process isn't ideal and something feels wrong.
As background:
We have previously looked at splitting it up into different applications within the same project (like this: Symfony2 multiple applications and api centric application; we actually followed this method: http://jolicode.com/blog/multiple-applications-with-symfony2), but ran into difficulties, and in any case Fabien says not to (https://groups.google.com/forum/#!topic/symfony-devs/yneojUuFiqw). That this existed in Symfony1 and was removed in Symfony2 is enough of an argument for me.
We have previously gone down the route of splitting up each bundle and importing it using composer, but this caused too many development overheads (for example, having to modify many repositories to implement a feature; it not being possible to see all of the changes for a feature in a single pull request).
We are receiving an ever growing number of requests to create APIs, and we're similarly worried about putting each application in its own repository.
So, putting each of the three applications in a separate Symfony project / git repository is something we want to avoid too.

Difference between starting process from Console application and ASP.NET application

I have a Web API application that needs to run a Python script, which in turn runs a Perl script :), does some other stuff, and I need to get the output results from it.
The way I do this is by starting a Process:
var start = new ProcessStartInfo()
{
    FileName = _pythonPath,  // e.g. @"C:\Python27\python.exe"
    Arguments = arguments,   // e.g. @"D:\apps\scripts\Process.py"
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true
};

using (Process process = Process.Start(start))
{
    using (StreamReader reader = process.StandardOutput)
    {
        var result = reader.ReadToEnd();
        var err = process.StandardError.ReadToEnd();
        process.WaitForExit();
        return result;
    }
}
The script tries to connect to the Perforce server using the P4 Python API, and the Perl script calls a P4 command as well. When running this code from a console application, everything works fine: the program automatically picks up the Perforce settings (I've got a P4V client with all the settings specified). But when running from ASP.NET Web API, it doesn't get the settings and says that it cannot connect to the perforce:1666 server (I guess this is the default value when no setting is specified).
I understand that not many people use Perforce, especially in this way, and can help here, but I would like to know what difference between running this script from a console app and from a Web API app might cause this different behaviour.
One of the most obvious differences between running code from a console application and running code in IIS* is that, usually, the two pieces of code will be running under different user accounts.
Frequently, if you're experiencing issues where code works in one of those situations and not the other, it's a permissions or a per-user-settings issue. You can verify whether this is the case by either running the console application under the same user account that is configured for the appropriate IIS application pool, or vice versa, configuring the application pool to use your own user account, and then seeing whether the problem persists.
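A small diagnostic sketch for that check (assumption: Windows host; the class and method names here are made up for illustration). Call it from both the console app and a Web API endpoint and compare the output:

using System;
using System.Security.Principal;

public static class IdentityDiagnostics
{
    public static string Describe()
    {
        // The account the code runs under and the profile folder its per-user settings come from.
        var windowsUser = WindowsIdentity.GetCurrent().Name;
        var userProfile = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
        return $"Running as: {windowsUser}, user profile: {userProfile}";
    }
}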
If you've confirmed that it's a permissions issue and/or per-user-settings, then you need to decide how you want to fix it. I'd usually recommend not running IIS application pools under your own user account - if you cannot seem to get the correct settings configured for the existing application pool user, I'd usually recommend creating a separate user account (either locally on the machine or as part of your domain, depending on what's required) and giving it just the permissions/settings that it requires to do the job.
*IIS Express being the exception here because it doesn't do application pools and the code does end up running under your own user account.
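If it does turn out to be per-user settings, one way to sidestep them is to pass the Perforce settings explicitly to the child process instead of relying on whatever the worker-process account has configured. A sketch, assuming the same ProcessStartInfo as in the question; the P4PORT/P4USER/P4CLIENT values are placeholders:

var start = new ProcessStartInfo()
{
    FileName = _pythonPath,
    Arguments = arguments,
    UseShellExecute = false,          // required for EnvironmentVariables to take effect
    RedirectStandardOutput = true,
    RedirectStandardError = true
};

// Explicit Perforce settings for the child process, instead of per-user defaults.
start.EnvironmentVariables["P4PORT"] = "perforce-server:1666";
start.EnvironmentVariables["P4USER"] = "build-user";
start.EnvironmentVariables["P4CLIENT"] = "build-workspace";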
