This is a question about the project https://github.com/DiUS/pact-jvm.
Problem
When I am validating pacts I need to be able to use client-side authentication, because the providers actually require it. I'll preface this by saying that I am not very familiar with Groovy: I mostly program in Scala, Java or JavaScript. Having looked at the code, I don't think client-side authentication is currently supported, so I'd like to make a pull request adding that support.
What I've done so far
I have managed to get HTTPS working with a truststore: I copied HttpTarget, created an HttpsTarget, and in the HttpsTarget specified the truststore on the ProviderInfo. Unfortunately, looking at the code there doesn't seem to be a way of specifying the client certificate, so I need to change the ProviderInfo class to be able to specify where it is (in the same way that the truststore is provided).
My problem is that I've got the code compiling using the advice in the 'for contributors' guide, but when I publish locally, I am only publishing for Scala 2.12. Because of version issues and binary incompatibilities between Scala versions, I need to publish for Scala 2.11. My Gradle skills are even weaker than my Groovy skills. I've searched for all the references to scalaVersion and found that there is quite a lot of logic around it, but I've not managed to track down where it is specified.
Question
Could you let me know whether I can use client-side authentication with the current pact validator? If not, could you tell me how to publish the project with support for Scala 2.11?
Thanks
In the end I made my own HTTPS target. I only need to run from JUnit, not to cover the general case, and this is good enough:
public class HttpsTarget extends HttpTarget {

    public HttpsTarget(final int port) {
        super("https", "localhost", port, "/", false);
    }

    static class HttpsClientFactory implements IHttpClientFactory {
        @NotNull
        @Override
        public CloseableHttpClient newClient(Object o) {
            // Build an SSLContext that presents the client certificate. The keystore
            // paths and passwords below are placeholders for your own values.
            SSLContext sslContext;
            try {
                sslContext = org.apache.http.ssl.SSLContexts.custom()
                        .loadKeyMaterial(new java.io.File("client-keystore.jks"),
                                "changeit".toCharArray(), "changeit".toCharArray())
                        .loadTrustMaterial(new java.io.File("truststore.jks"),
                                "changeit".toCharArray())
                        .build();
            } catch (java.security.GeneralSecurityException | java.io.IOException e) {
                throw new IllegalStateException("Could not create SSL context for client authentication", e);
            }
            CloseableHttpClient httpClient = HttpClients
                    .custom()
                    .setSSLContext(sslContext)
                    .build();
            return httpClient;
        }
    }

    @Override
    public void testInteraction(final String consumerName, final Interaction interaction, PactSource source) {
        ProviderInfo provider = getProviderInfo(source);
        ConsumerInfo consumer = new ConsumerInfo(consumerName);
        ProviderVerifier verifier = setupVerifier(interaction, provider, consumer);
        Map<String, Object> failures = new HashMap<>();
        ProviderClient client = new ProviderClient(provider, new HttpsClientFactory());
        verifier.verifyResponseFromProvider(provider, interaction, interaction.getDescription(), failures, client);
        reportTestResult(failures.isEmpty(), verifier);
        try {
            if (!failures.isEmpty()) {
                verifier.displayFailures(failures);
                throw getAssertionError(failures);
            }
        } finally {
            verifier.finialiseReports();
        }
    }
}
Related
The pact-jvm-provider-spring documentation states that for a JUnit 5 provider test it is not required to use the Spring library.
However, the @PactBroker annotation depends on system properties. Is there a way to get this working with application properties via the Spring property resolver? I tried to create something similar to SpringEnvironmentResolver.kt and used it in the context setup, but that did not work.
@Provider("api-provider-app")
@PactBroker
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@ActiveProfiles("test")
public class PactVerificationTest {

    @LocalServerPort
    private int port;

    @Autowired
    private Environment environment;

    @TestTemplate
    @ExtendWith(PactVerificationInvocationContextProvider.class)
    void testTemplate(Pact pact, Interaction interaction, HttpRequest request,
                      PactVerificationContext context) {
        context.setTarget(new HttpTestTarget("localhost", port));
        context.setValueResolver(new SpringResolver(environment));
        context.verifyInteraction();
    }
}
I get the following error
Invalid pact broker host specified ('${pactbroker.host:}'). Please provide a valid host or specify the system property 'pactbroker.host'.
Update
After some more searching I found out that the setTarget call was not working there and needs to be moved to a @BeforeEach method.
@BeforeEach
void setContext(PactVerificationContext context) {
    context.setValueResolver(new SpringResolver(environment));
    context.setTarget(new HttpTestTarget("localhost", port));
}
The snippet above got it working with the @PactFolder annotation, but @PactBroker with properties is still not working.
There is a new module added to Pact-JVM that extends the JUnit 5 support to allow values to be configured in the Spring context. See https://github.com/DiUS/pact-jvm/tree/master/provider/pact-jvm-provider-junit5-spring. It will be released with the next version of Pact-JVM, which will be 4.0.7.
When I access an ASP.NET Web API using Angular from the same web site, all the returned JSON property names have their first letter lowercased, even if they aren't lowercase on the server.
However, if I move the APIs to a different web site and enable CORS, I receive the JSON with the properties exactly as they are written on the server.
Is there some way to control this difference? It becomes a mess when I need to move an API to a different web site.
The default settings for serializing output to JSON have changed (ASP.NET Core camel-cases property names by default), so you may encounter this issue when migrating between .NET frameworks. To keep property names exactly as they are declared on the server, specify the DefaultContractResolver:
public void ConfigureServices(IServiceCollection services)
{
    // Add framework services.
    services.AddMvc()
        .AddJsonOptions(options => options.SerializerSettings.ContractResolver = new DefaultContractResolver());
}
In your OWIN Startup, add the line marked below:
public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var webApiConfiguration = ConfigureWebApi();
        app.UseWebApi(webApiConfiguration);
    }

    private HttpConfiguration ConfigureWebApi()
    {
        var config = new HttpConfiguration();

        // ADD THIS LINE HERE AND DONE
        config.Formatters.JsonFormatter.SerializerSettings.ContractResolver = new CamelCasePropertyNamesContractResolver();

        config.MapHttpAttributeRoutes();
        return config;
    }
}
JSON is case sensitive, so as a rule JSON keys should have matching case. Some Unix servers will ignore case, but I believe Windows servers enforce it.
The mismatch comes from the naming conventions used by the API or by the code requesting/processing those keys. It is best to use lowercase keys with an under_score as the separator instead of camelCasing.
http://jsonrpc.org/historical/json-rpc-1-1-alt.html#service-procedure-and-parameter-names
ref: When is case sensitivity important in JSON requests to ASP.NET web services (ASMX)?
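If you do want under_score keys from a Json.NET-based API, one option is to plug a snake-case naming strategy into the contract resolver. This is only a sketch: the Person DTO is a made-up example, and SnakeCaseNamingStrategy requires Json.NET 9.0 or later.
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public static class SnakeCaseExample
{
    public static string Serialize(Person person)
    {
        var settings = new JsonSerializerSettings
        {
            // Produces { "first_name": "...", "last_name": "..." }
            ContractResolver = new DefaultContractResolver
            {
                NamingStrategy = new SnakeCaseNamingStrategy()
            }
        };

        return JsonConvert.SerializeObject(person, settings);
    }
}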
How can I check inside the application if it is being hosted in IIS?
Check if the environment variable APP_POOL_ID is set.
public static bool InsideIIS() =>
    System.Environment.GetEnvironmentVariable("APP_POOL_ID") is string;
See also: the full list of environment variables that IIS sets on a child process.
I've tried the answer by Branimir Ričko but found that it's not correct: this environment variable is also set when running under IIS Express.
So here is my modified version:
static bool IsRunningInsideIIS() =>
    System.Environment.GetEnvironmentVariable("ASPNETCORE_HOSTINGSTARTUPASSEMBLIES") is string startupAssemblies &&
    startupAssemblies.Contains(typeof(Microsoft.AspNetCore.Server.IISIntegration.IISDefaults).Namespace);
I believe there is no direct way to achieve that out of the box; at least I haven't found one. The reason, as far as I can tell, is that an ASP.NET Core application is actually a self-contained application that knows nothing about its parent context, unless the latter reveals information about itself.
For example, in the configuration file we can tell which type of installation we're running: production or development. We could assume that production means IIS while development does not, but that didn't work for me, since my production setup could be either IIS or a Windows service.
So I worked around the problem by supplying different command-line arguments to my application depending on the type of run it is supposed to perform. That actually came naturally for me, since a Windows service requires a different approach to run anyway.
For example, in my case the code looked somewhat like this:
namespace AspNetCore.Web.App
{
    using McMaster.Extensions.CommandLineUtils;
    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Hosting.WindowsServices;
    using System;
    using System.Diagnostics;
    using System.IO;

    public class Program
    {
        #region Public Methods

        public static IWebHostBuilder GetHostBuilder(string[] args, int port) =>
            WebHost.CreateDefaultBuilder(args)
                .UseKestrel()
                .UseIISIntegration()
                .UseUrls($"http://*:{port}")
                .UseStartup<Startup>();

        public static void Main(string[] args)
        {
            var app = new CommandLineApplication();
            app.HelpOption();

            var optionHosting = app.Option("--hosting <TYPE>", "Type of the hosting used. Valid options: `service` and `console`, `console` is the default one", CommandOptionType.SingleValue);
            var optionPort = app.Option("--port <NUMBER>", "Port to be used, `5000` is the default one", CommandOptionType.SingleValue);

            app.OnExecute(() =>
            {
                // Default to console hosting unless --hosting was supplied
                var hosting = optionHosting.HasValue()
                    ? optionHosting.Value()
                    : "console";

                var port = optionPort.HasValue()
                    ? new Func<int>(() =>
                    {
                        if (int.TryParse(optionPort.Value(), out var number))
                        {
                            // Returning successfully parsed number
                            return number;
                        }

                        // Returning default port number in case of failure
                        return 5000;
                    })()
                    : 5000;

                var builder = GetHostBuilder(args, port);

                if (Debugger.IsAttached || hosting.ToLowerInvariant() != "service")
                {
                    builder
                        .UseContentRoot(Directory.GetCurrentDirectory())
                        .Build()
                        .Run();
                }
                else
                {
                    builder
                        .UseContentRoot(Path.GetDirectoryName(Process.GetCurrentProcess().MainModule.FileName))
                        .Build()
                        .RunAsService();
                }
            });

            app.Execute(args);
        }

        #endregion Public Methods
    }
}
This code not only lets you select the type of hosting (service, or console, which is the option IIS is supposed to use), but also lets you change the port, which is important when you're running as a Windows service.
Another good thing is the use of the argument-parsing library McMaster.Extensions.CommandLineUtils: it will show help for the configured command-line switches, so it is easy to pick the right values.
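For example (the assembly name here is illustrative, derived from the namespace above), the Windows service would be registered to run dotnet AspNetCore.Web.App.dll --hosting service --port 8080, while a console run, or IIS, can simply rely on the defaults.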
I am using Microsoft SignalR to push notifications to browsers. Those notifications are triggered by actions from other browsers. I want to make a background task which sends notifications from time to time. For example, at 12:45:21 I want to fire a notification to all connected users, even if they are doing nothing. Is it possible to do that?
SignalR doesn't give you the ability to run a background task, but if you are running one, there is nothing to stop that task from using your SignalR hub to invoke client methods and send any desired notification.
To launch and control your background task, Hangfire is a flexible library that should help.
Edit to add: Since you've clarified that you want to do this in a Windows service, another prominent library to assist with building and deploying services is Topshelf.
Edit to add: Also, I gather from your comment that you're trying to understand how to access the hub object from your background task? There are many ways to do this, but to improve the testability and maintainability of your program, I recommend using an IoC (Inversion of Control) container and injecting the necessary references. The "Dependency Injection in SignalR" tutorial has a walkthrough using the Ninject library. That walkthrough is oriented towards ASP.NET hosting, but the link you found should help with adapting it to a Windows service.
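For example, with classic (non-Core) SignalR 2.x you can resolve a hub context outside of any hub via GlobalHost and broadcast from a timer. This is only a minimal sketch under those assumptions: NotificationHub, the broadcastMessage client method and the one-minute interval are made up for illustration, and in a real application the job would typically be scheduled by Hangfire or managed by your IoC container.
using System;
using System.Threading;
using Microsoft.AspNet.SignalR;

// Hypothetical hub; clients would subscribe to "broadcastMessage" on their end.
public class NotificationHub : Hub { }

public class NotificationScheduler : IDisposable
{
    private readonly IHubContext _hubContext;
    private readonly Timer _timer;

    public NotificationScheduler()
    {
        // Resolve the hub context without being inside a hub instance.
        _hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();

        // Fire every minute; in practice let Hangfire or your host schedule this.
        _timer = new Timer(Tick, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
    }

    private void Tick(object state)
    {
        // Push a notification to every connected client.
        _hubContext.Clients.All.broadcastMessage($"Scheduled notification at {DateTime.Now:T}");
    }

    public void Dispose() => _timer.Dispose();
}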
If you are using ASP.NET Core 2.1, this is now possible using BackgroundService/IHostedService.
https://github.com/davidfowl/UT3/blob/fb12e182d42d2a5a902c1979ea0e91b66fe60607/UTT/Scavenger.cs#L9-L40
(Contents below)
public class Scavenger : BackgroundService
{
    private readonly IHubContext<UTTHub> _hubContext;
    private readonly ILogger<Scavenger> _logger;

    public Scavenger(IHubContext<UTTHub> hubContext, ILogger<Scavenger> logger)
    {
        _hubContext = hubContext;
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Mark games that haven't played in a while as completed
            var changed = Game.MarkExpiredGames();

            // Mark completed games as removed
            var removed = Game.RemoveCompletedGames();

            if (removed > 0)
            {
                _logger.LogInformation("Removed {GameCount} games.", removed);
            }

            if (removed > 0 || changed)
            {
                await _hubContext.Clients.All.SendAsync("GameUpdated", Game.GetGames());
            }

            await Task.Delay(5000);
        }
    }
}
Also see this
https://github.com/aspnet/Docs/issues/8925
I am pretty new to NUnit (and automated testing in general). I have recently done some Ruby on Rails work and noticed that in my test suite, when I create objects (such as a new user) and commit them during the course of the suite, they are never actually committed to the database, so I can run the tests over and over without worrying about that user already existing.
I am now trying to accomplish the same thing in NUnit, but I am not quite sure how to go about doing it. Do I create a transaction in the Setup and Teardown blocks? Thanks.
Why would you talk to the database during unit tests? That turns your unit tests into integration tests by default. Instead, create wrappers for all database communication and stub/mock them during unit tests, as sketched below. Then you don't have to worry about database state before and after.
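As a rough illustration of that refactoring (IUserRepository, FakeUserRepository and RegistrationService below are hypothetical, not part of any library), the production code depends only on a wrapper and the test supplies an in-memory fake:
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical wrapper around all database communication.
public interface IUserRepository
{
    void Add(string userName);
    bool Exists(string userName);
}

// In-memory stand-in used only by the unit tests; no database involved.
public class FakeUserRepository : IUserRepository
{
    private readonly List<string> _users = new List<string>();
    public void Add(string userName) => _users.Add(userName);
    public bool Exists(string userName) => _users.Contains(userName);
}

// Code under test depends on the wrapper, not on a real database.
public class RegistrationService
{
    private readonly IUserRepository _users;
    public RegistrationService(IUserRepository users) => _users = users;

    public bool Register(string userName)
    {
        if (_users.Exists(userName)) return false;
        _users.Add(userName);
        return true;
    }
}

[TestFixture]
public class RegistrationServiceTests
{
    [Test]
    public void Registering_the_same_user_twice_fails()
    {
        var service = new RegistrationService(new FakeUserRepository());

        Assert.IsTrue(service.Register("alice"));
        Assert.IsFalse(service.Register("alice"));
    }
}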
Now, if you are not willing to do that level of refactoring: the problem with transactions is that you need an open connection. So, if the method targeted for testing handles all of its database communication on its own, it is really difficult to inject a transaction that you can create at setup and roll back at teardown.
Maybe you can use this. It is ugly, but perhaps it can work for you:
namespace SqlServerHandling
{
    using System.Data.SqlClient;
    using NUnit.Framework;

    [TestFixture]
    public sealed class TestTransactionRollBacks
    {
        private string _connectionString = "Data Source = XXXDB; Initial Catalog = XXX; User Id = BLABLA; Password = BLABLA";
        private SqlConnection _connection;
        private SqlTransaction _transaction;

        [SetUp]
        public void SetUp()
        {
            _connection = new SqlConnection(_connectionString);
            _connection.Open(); // the connection must be open before a transaction can start
            _transaction = _connection.BeginTransaction();
        }

        [TearDown]
        public void TearDown()
        {
            _transaction.Rollback();
            _connection.Dispose();
        }

        [Test]
        public void Test()
        {
            Foo foo = new Foo(_connection);
            object result = foo.Bar();
        }
    }

    internal class Foo
    {
        private readonly SqlConnection _connection;
        object someObject = new object();

        public Foo(SqlConnection connection)
        {
            _connection = connection;
        }

        public object Bar()
        {
            // Do your stuff
            return someObject;
        }
    }
}
I agree with Morten's answer, but you might want to look at this very old MSDN Magazine article on the subject: Know Thy Code: Simplify Data Layer Unit Testing using Enterprise Services
I use SQLite for unit tests, with NHibernate. Even if you're not using NHibernate it should still be possible. SQLite has an in-memory mode, where you can create a database in memory and persist data there. It is fast, works well, and you can simply throw away and recreate the schema for each test or fixture as you see fit.
You can see the example from Ayende's blog for an overview of how it's done. He is using NHibernate, but the concept should work with another ORM or a straight DAL as well.
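For reference, here is a rough NUnit sketch of the in-memory idea without NHibernate, using the System.Data.SQLite provider; the Users table and schema are just illustrative, and each test gets a fresh database that disappears when the connection is closed:
using System.Data.SQLite;
using NUnit.Framework;

[TestFixture]
public class InMemorySqliteTests
{
    private SQLiteConnection _connection;

    [SetUp]
    public void SetUp()
    {
        // The in-memory database lives only as long as this connection stays open.
        _connection = new SQLiteConnection("Data Source=:memory:");
        _connection.Open();

        using (var create = _connection.CreateCommand())
        {
            create.CommandText = "CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL)";
            create.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void TearDown() => _connection.Dispose(); // schema and data are discarded here

    [Test]
    public void Inserted_user_can_be_read_back()
    {
        using (var insert = _connection.CreateCommand())
        {
            insert.CommandText = "INSERT INTO Users (Name) VALUES ('alice')";
            insert.ExecuteNonQuery();
        }

        using (var count = _connection.CreateCommand())
        {
            count.CommandText = "SELECT COUNT(*) FROM Users";
            Assert.AreEqual(1L, (long)count.ExecuteScalar());
        }
    }
}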