I am hosting my standalone Blazor WASM app in an Azure Storage Account Static website and wondering how to handle switching between development and production API endpoints using settings in appsettings.json/appsettings.staging.json. The documentation I've found talks more about App Service hosted apps.
I cannot get the Blazor.start() method to work.
I must admit I haven't tried the option to inject an IConfiguration and use HttpClient but would like to check if there's a simple method.
I can't claim to have come up with this myself, but I cannot find my reference right now. Anyway, I ended up adding this method to my Program.cs.
Program.cs
private static async Task ConfigureApiEndpoint(WebAssemblyHostBuilder builder)
{
    // Use the app's own base address so the JSON file is fetched from wwwroot.
    var http = new HttpClient()
    {
        BaseAddress = new Uri(builder.HostEnvironment.BaseAddress)
    };

    // Pick the endpoint file based on where the app is being served from.
    string apiEndpoint;
    if (builder.HostEnvironment.BaseAddress.Contains("localhost"))
        apiEndpoint = "api-endpoint-staging.json";
    else if (builder.HostEnvironment.BaseAddress.Contains("mysubdomain.domain.com"))
        apiEndpoint = "api-endpoint-staging.json";
    else
        apiEndpoint = "api-endpoint-production.json";

    // Download the file and merge it into the host's configuration.
    using var response = await http.GetAsync(apiEndpoint);
    using var stream = await response.Content.ReadAsStreamAsync();
    builder.Configuration.AddJsonStream(stream);
}
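For reference, this is roughly how the method can be wired into Program.Main before the host is built; the Main body below is a sketch based on the standard Blazor WASM template, not code from the original post.
public static async Task Main(string[] args)
{
    var builder = WebAssemblyHostBuilder.CreateDefault(args);
    builder.RootComponents.Add<App>("#app");

    // Load the environment-specific endpoint file into configuration
    // before anything tries to read it.
    await ConfigureApiEndpoint(builder);

    await builder.Build().RunAsync();
}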
api-endpoint-staging.json
Put this in the wwwroot folder. Set the build action to Content and do not copy to the output directory.
{
    "SirenApi": {
        "BaseAddress": "https://mysubdomain.domain.com/api/"
    }
}
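Once the stream has been added, the value is available through builder.Configuration. As a sketch (the HttpClient registration below is an assumption about how the value might be consumed, not part of the original answer):
// After ConfigureApiEndpoint(builder) has run:
var apiBaseAddress = builder.Configuration["SirenApi:BaseAddress"];

builder.Services.AddScoped(sp => new HttpClient
{
    BaseAddress = new Uri(apiBaseAddress)
});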
I have a .NET Framework application in which I try to read data from the AWS Parameter Store using AmazonSimpleSystemsManagementClient in my local environment. I also have credentials generated by the AWS CLI, located in the
Users/MyUser/.aws
folder. When I connect to the Parameter Store from CMD using those creds, it works fine. However, when the application creates AmazonSimpleSystemsManagementClient with the default constructor, it throws the exception "Unable to get IAM security credentials from EC2 Instance Metadata Service." When I tried to pass BasicAWSParameters to the client with hardcoded working keys, I got another exception: "The security token included in the request is invalid".
I also tried installing EC2Config and initializing the AWS SDK Store from the Visual Studio AWS Toolkit, but that didn't change anything.
I want to avoid using environment variables or hardcoding the keys, since the keys are generated and valid for only one hour; regenerating and copying them somewhere every time is not convenient.
Please advise how to resolve the issue.
Some code
_client = new AmazonSimpleSystemsManagementClient();

public string GetValue(string key)
{
    if (_client == null)
        return null;

    var request = new GetParameterRequest
    {
        Name = $"{_baseParameterPath}/{key}",
        WithDecryption = true,
    };

    try
    {
        var response = _client.GetParameterAsync(request).Result;
        return response.Parameter.Value;
    }
    catch (Exception)
    {
        return null;
    }
}
The credentials file looks as follows (I removed the key values so as not to expose them):
[default]
aws_access_key_id= KEY VALUE
aws_secret_access_key= KEY VALUE
aws_session_token= KEY VALUE
[MyProfile]
aws_access_key_id= KEY VALUE
aws_secret_access_key= KEY VALUE
aws_session_token= KEY VALUE
As long as you have your creds in .aws/credentials, you can create the service client and the creds will be located and used automatically. There is no need to create a BasicAWSParameters object.
Creds in a file named credentials:
[default]
aws_access_key_id=Axxxxxxxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key=/zxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
This .NET code works.
using System;
using System.Threading.Tasks;
using Amazon.SimpleSystemsManagement;
using Amazon.SimpleSystemsManagement.Model;

namespace ConsoleApp1
{
    class Program
    {
        static async Task Main(string[] args)
        {
            // The default constructor picks up credentials from the [default]
            // profile in the shared credentials file.
            var client = new AmazonSimpleSystemsManagementClient();

            var request = new GetParameterRequest()
            {
                Name = "RDSConnection"
            };

            var response = await client.GetParameterAsync(request);
            Console.WriteLine("Parameter value is " + response.Parameter.Value);
        }
    }
}
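If the temporary session credentials live under a named profile such as [MyProfile] rather than [default], one option is to resolve that profile explicitly through the SDK's credential profile store. This is a sketch I'm adding for that case, not part of the original answer; the profile name comes from the question.
using Amazon.Runtime;
using Amazon.Runtime.CredentialManagement;
using Amazon.SimpleSystemsManagement;

// Look up the named profile in the shared credentials file (~/.aws/credentials).
var chain = new CredentialProfileStoreChain();
if (chain.TryGetAWSCredentials("MyProfile", out AWSCredentials credentials))
{
    var client = new AmazonSimpleSystemsManagementClient(credentials);
    // ... call GetParameterAsync as in the example above
}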
I've read a lot of conflicting information about this and it seems people are not 100% clear on what is possible and what is not. I am certain that you cannot host a gRPC server app in IIS due to the HTTP/2 limitations. The documentation is pretty clear. However, I want to use IIS as a reverse proxy, with the internal side communicating using gRPC. So the client would be in IIS, not the server. I assumed that since the communication at this point (i.e. the back end) was not funneled through IIS, there would be no issue with this. However, I keep seeing mixed answers.
I have created a dumb webapp that is hosted in IIS Express and can successfully post to my service running on Kestrel with gRPC.
Client code sample below. The SubmitButton is just a form post on the razor page.
public async Task OnPostSubmitButton()
{
    // The port number (5001) must match the port of the gRPC server.
    using var channel = GrpcChannel.ForAddress("https://localhost:5001");
    var client = new Greeter.GreeterClient(channel);
    var reply = await client.SayHelloAsync(
        new HelloRequest { Name = "GreeterClient" });
    Console.WriteLine("Greeting: " + reply.Message);
    Console.WriteLine("Press any key to exit...");
    Console.ReadKey();
}
Server code is the boilerplate template for gRPC but looks like this:
namespace grpcGreeter
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }

        // Additional configuration is required to successfully run gRPC on macOS.
        // For instructions on how to configure Kestrel and gRPC clients on macOS, visit https://go.microsoft.com/fwlink/?linkid=2099682
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
}

namespace grpcGreeter
{
    public class GreeterService : Greeter.GreeterBase
    {
        private readonly ILogger<GreeterService> _logger;

        public GreeterService(ILogger<GreeterService> logger)
        {
            _logger = logger;
        }

        public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
        {
            return Task.FromResult(new HelloReply
            {
                Message = "Hello " + request.Name
            });
        }
    }
}
This works. But because I keep seeing mixed information saying that it won't, I am not certain whether I will run into problems once I go to deploy the client code (i.e. the reverse proxy). I would like to use a host like Azure, but don't know if it's possible or not.
Any clarity on the subject would be greatly appreciated.
As far as I know, you can use an ASP.NET Core MVC or Razor Pages application as the client to call the gRPC server.
However, the gRPC client requires the service to have a trusted certificate when the application is hosted on a remote IIS server.
If you don't have permission to install a certificate, you can use HttpClientHandler.ServerCertificateCustomValidationCallback to allow calls without a trusted certificate.
Note: this makes the call insecure.
Additional configuration is required to call insecure gRPC services with the .NET Core client. The gRPC client must set the System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport switch to true and use http in the server address.
Code as below:
AppContext.SetSwitch(
    "System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

var httpClientHandler = new HttpClientHandler();
// Return `true` to allow certificates that are untrusted/invalid
httpClientHandler.ServerCertificateCustomValidationCallback =
    HttpClientHandler.DangerousAcceptAnyServerCertificateValidator;
var httpClient = new HttpClient(httpClientHandler);

var channel = GrpcChannel.ForAddress("https://localhost:5001",
    new GrpcChannelOptions { HttpClient = httpClient });
var client = new Greeter.GreeterClient(channel);
var response = await client.SayHelloAsync(new HelloRequest { Name = "World" });
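For the fully unencrypted variant mentioned above (plain http, no TLS at all), the channel is created against an http address instead. This is a sketch assuming the server also exposes an HTTP/2 endpoint without TLS on port 5000:
// Allow the .NET Core 3.x gRPC client to use HTTP/2 without TLS.
AppContext.SetSwitch(
    "System.Net.Http.SocketsHttpHandler.Http2UnencryptedSupport", true);

// Note the http:// scheme; the port must match the server's unencrypted endpoint.
var channel = GrpcChannel.ForAddress("http://localhost:5000");
var client = new Greeter.GreeterClient(channel);
var response = await client.SayHelloAsync(new HelloRequest { Name = "World" });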
I'm wondering if there is a way I can set up a .NET MVC app the same way a .NET Core app can be set up with Key Vault.
https://learn.microsoft.com/en-us/aspnet/core/security/key-vault-configuration?view=aspnetcore-2.2#secret-storage-in-the-development-environment
I want to be able to pull secrets in my local development environment without provisioning an Azure Key Vault.
In a .net core app I was able to follow the above link and get everything working.
// ran this command in powershell
dotnet user-secrets set "db-connection-string" "db-connection-string-value"
Set up the key vault with an empty endpoint in Program.cs (this is just for testing).
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((ctx, builder) =>
        {
            var keyVaultEndpoint = GetKeyVaultEndpoint();
            if (!string.IsNullOrEmpty(keyVaultEndpoint))
            {
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(
                    new KeyVaultClient.AuthenticationCallback(
                        azureServiceTokenProvider.KeyVaultTokenCallback));
                builder.AddAzureKeyVault(
                    keyVaultEndpoint, keyVaultClient, new DefaultKeyVaultSecretManager());
            }
        })
        .UseStartup<Startup>()
        .Build();

private static string GetKeyVaultEndpoint() => "";
And I am able to access my secret locally using the below code.
public void OnGet()
{
    Message = "My key val = " + _configuration["db-connection-string"];
}
In my asp.net app I am able to pull secrets from an Azure Key Vault. I just don't want to have to do that when developing locally since each developer could have a slightly different connection string to their local database, and requiring them to use a Key Vault in Azure is going to be annoying and cause frustration. Maybe we shouldn't use Key Vault for "legacy" applications. Any help is appreciated.
This seems similar to our development environment.
We wanted developers to use some values from the key vault, but to be able to override particular values for their local environment. (particularly the database connection string).
In the ConfigureAppConfiguration method you can register multiple configuration providers. If two configuration providers write to the same configuration key, the last registered provider gets precedence.
Knowing this, you can have a local file appSettings.json, which defines the database connection string:
{
    "db-connection-string": "Data Source = DEVELOPMENT_PC;Initial Catalog=MyLocalDatabase;Integrated Security=True"
}
Then register this provider after you have registered the key vault provider.
.ConfigureAppConfiguration((ctx, builder) =>
{
    var keyVaultEndpoint = GetKeyVaultEndpoint();
    if (!string.IsNullOrEmpty(keyVaultEndpoint))
    {
        var azureServiceTokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(
                azureServiceTokenProvider.KeyVaultTokenCallback
            )
        );
        builder.AddAzureKeyVault(
            keyVaultEndpoint,
            keyVaultClient,
            new DefaultKeyVaultSecretManager()
        );
    }

    var appSettingsFilePath = GetJsonAppSettingsFile(); // Hardcoded, or add some other configuration logic
    if (!string.IsNullOrEmpty(appSettingsFilePath))
    {
        // optional: true means a missing file (e.g. in production) is not an error.
        builder.AddJsonFile(appSettingsFilePath, optional: true);
    }
})
Obviously, the production environment will have an empty or non-existent appSettings.json file, and all values will come from the key vault. Alternatively, you can use the appSettings.json file in your production environment, but only for configuration values that do not contain secrets.
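If you prefer to keep per-developer overrides out of source control entirely, the same layering idea works with the user-secrets provider the question already uses. This is a sketch, assuming the Microsoft.Extensions.Configuration.UserSecrets package is referenced:
.ConfigureAppConfiguration((ctx, builder) =>
{
    // ... key vault registration as above ...

    // Registered last, so local user secrets win over key vault values
    // on a developer machine.
    if (ctx.HostingEnvironment.IsDevelopment())
    {
        builder.AddUserSecrets<Startup>();
    }
})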
I'm trying to load a file on a self-hosted OWIN-based server.
WebApp.Start<Startup>("http://localhost:3001/");
Here is how I map the path:
var path = HostingEnvironment.MapPath("~");
it always returns null!
On the other hand, if the website is hosted in IIS (or express) the value of path is right.
How can I populate this value for self-hosted OWIN?
Since OWIN self-hosting doesn't use IIS, use the absolute path instead of a virtual path.
var path = HostingEnvironment.MapPath("~");
if (path == null)
{
    // Not running under IIS: fall back to the folder containing the executing assembly.
    var uriPath = Path.GetDirectoryName(Assembly.GetExecutingAssembly().GetName().CodeBase);
    path = new Uri(uriPath).LocalPath;
}
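A quick usage sketch with the resolved path (the file name here is hypothetical, not from the question):
// Resolve a content file relative to the application's base folder.
var filePath = Path.Combine(path, "App_Data", "settings.json");
var json = File.ReadAllText(filePath);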
When running OWIN under dnx, I found that many of the suggested workarounds that read the current app domain returned the .dnx runtime folder, not the web app location.
Here's what worked for me inside OWIN startup:
public void Configure(IApplicationBuilder app)
{
    app.Use(async (ctx, next) =>
    {
        var hostingEnvironment = app.ApplicationServices.GetService<IHostingEnvironment>();
        var realPath = hostingEnvironment.WebRootPath + ctx.Request.Path.Value;
        // do something with the real file path
        await next();
    });
}
I am trying to move from a WebAPI based REST service to one encompassing the new implementation of OData. I have the service working correctly, but am at a loss on how to create unit tests that will test the OData query options.
When unit testing WebAPI methods, I am used to building the HttpRequestMessage and injecting it in the constructor:
var request = new HttpRequestMessage();
request.Headers.Add("UserName", "TestUser");
request.Headers.Add("Password", password);
request.Headers.Add("OverRideToken", "false");
request.Headers.Add("AccessSystem", "Mobile");
request.Headers.Add("Seed", "testSeed");
var token = new Token();
var authController = new AuthorizationController(request);
try
{
    var returnValue = authController.Get();
How would I go about injecting the OData request? I need to verify that $filter, $inlinecount, and other options are returning the proper records.
You can either test your controller or you can test against a running instance of your Web API (you should probably do both).
Testing your controller won't achieve what you are trying to do, so you will want to test by creating a self-hosted, in-memory instance of your Web API application. You can then either use HttpClient in your test classes (you will have to construct OData requests manually), or you can use the WCF Data Services Client in your test classes (this will allow you to query via LINQ).
Here's an example using WCF Data Services Client:
public class ODataContainerFactory
{
    static HttpSelfHostServer server;

    public static MyApplicationServer.Acceptance.ODataService.Container Create(Uri baseAddress)
    {
        var config = new HttpSelfHostConfiguration(baseAddress);

        // Remove the self-host requirement to run with admin privileges
        config.HostNameComparisonMode = System.ServiceModel.HostNameComparisonMode.Exact;

        // Register Web API and OData configuration
        WebApiConfig.Register(config);

        // Configure IoC (the fake data source feeds the registrations below)
        var dataSource = new MockDatasource();
        ConfigureIoC(dataSource, config);

        // Do whatever else, e.g. set up fake data etc.
        // ...

        // Start server
        server = new HttpSelfHostServer(config);
        server.OpenAsync().Wait();

        // Create container
        var container = new MyApplicationServer.Acceptance.ODataService.Container(new Uri(baseAddress.ToString() + "odata/"));

        // Configure container
        container.IgnoreResourceNotFoundException = true;
        container.IgnoreMissingProperties = true;

        return container;
    }

    private static void ConfigureIoC(MockDatasource dataSource, HttpSelfHostConfiguration config)
    {
        var container = new UnityContainer();
        container.RegisterType<TypeA, TypeB>();
        // ...

        config.DependencyResolver = new IoCContainer(container);
    }

    public static void Destroy()
    {
        server.CloseAsync().Wait();
        server.Dispose();
    }
}
The key here is the WebApiConfig.Register(HttpConfiguration config) method call, which is calling your Web API project.
Note that prior to the above you will need to:
Fire up your Web API project.
In your test class, add a Service Reference to your OData root path.
This will create a Container object (in the example above MyApplicationServer.Acceptance.ODataService.Container), which you can use to query your OData feed in your tests as follows:
var odataContainer = ODataContainerFactory.Create(new Uri("http://localhost:19194/"));

var result = odataContainer.MyEntities
    .Expand(s => s.ChildReferenceType)
    .Where(s => s.EntityKey == someValue)
    .SingleOrDefault();
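For the raw HttpClient route mentioned above, a test can build the OData query string by hand against the same self-hosted base address. This is a sketch; the entity set name, port, filter value, and MSTest assertion are assumptions, not taken from the original answer:
[TestMethod]
public async Task Filter_and_inlinecount_return_expected_payload()
{
    var httpClient = new HttpClient { BaseAddress = new Uri("http://localhost:19194/odata/") };

    var response = await httpClient.GetAsync(
        "MyEntities?$filter=EntityKey eq 42&$inlinecount=allpages");
    response.EnsureSuccessStatusCode();

    var payload = await response.Content.ReadAsStringAsync();

    // With $inlinecount=allpages, Web API OData (v3) includes an "odata.count"
    // property in the JSON payload alongside the filtered results.
    Assert.IsTrue(payload.Contains("odata.count"));
}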