I've just created custom ASP.NET Membership, Role, Profile and Session State providers, and I was wondering whether there exists a test suite or something similar for testing provider implementations. I've checked some of the open source providers I could find (like the NauckIt.PostgreSQL provider), but none of them contained unit tests, and all of the forum topics I found mentioned only a few test cases (like checking whether creating a user works), which is clearly not a complete test suite for a Membership provider. (And I couldn't find anything for the other three providers.)
Are there more or less complete test suites for the providers mentioned above, or are there custom providers out there that at least have some testing available?
The short answer is that there is no such open test suite for providers.
Perhaps you can start one on CodePlex...
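If you do, here is a hedged sketch of where such a suite could start, assuming a hypothetical custom provider class `MyMembershipProvider` (NUnit shown, but any test framework works). The cases worth growing from here are the edge cases the built-in SqlMembershipProvider handles: duplicate users, password policy, locked-out accounts, and so on.

```csharp
// Hedged sketch: the start of a test suite for a custom MembershipProvider.
// "MyMembershipProvider" is a placeholder for your own provider class.
using System.Collections.Specialized;
using System.Web.Security;
using NUnit.Framework;

[TestFixture]
public class MembershipProviderTests
{
    private MembershipProvider _provider;

    [SetUp]
    public void SetUp()
    {
        _provider = new MyMembershipProvider();

        // Mirror the settings your web.config would supply to the provider.
        _provider.Initialize("TestProvider", new NameValueCollection
        {
            { "applicationName", "TestApp" },
            { "requiresUniqueEmail", "true" }
        });
    }

    [Test]
    public void CreateUser_ThenValidateUser_Succeeds()
    {
        MembershipCreateStatus status;
        _provider.CreateUser("alice", "P@ssw0rd!", "alice@example.com",
            null, null, true, null, out status);

        Assert.AreEqual(MembershipCreateStatus.Success, status);
        Assert.IsTrue(_provider.ValidateUser("alice", "P@ssw0rd!"));
    }

    [Test]
    public void ValidateUser_WrongPassword_Fails()
    {
        MembershipCreateStatus status;
        _provider.CreateUser("bob", "P@ssw0rd!", "bob@example.com",
            null, null, true, null, out status);

        Assert.IsFalse(_provider.ValidateUser("bob", "wrong-password"));
    }
}
```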
I'd like to create a set of automated tests that could run in a CI/CD pipeline. I'm struggling to understand how to verify the Generate and Validate Tokens portion of the "Sign in with Apple" flow (REST API implementation):
How can I verify that I'm properly handling the exchange of an authorization code for a refresh token, considering that the authorization code is single-use, is valid for only five minutes, and is itself obtained by authenticating? In my case, authenticating requires 2FA.
END TO END TESTS
A common starting point is to perform UI tests that verify logins in a basic way, using technologies such as Selenium:
These will automatically sign in test user accounts, to perform real logins and exchange of the authorization code for tokens.
After login the UI can proceed to test the application logic, such as calling real APIs using real tokens.
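For illustration, a minimal Selenium sketch in C#; the URL and element IDs are placeholders for whatever your real login page uses:

```csharp
// Minimal Selenium login sketch. The URL and element IDs below are
// placeholders; substitute the ones from your real login page.
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginUiTest
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/login");

            driver.FindElement(By.Id("email")).SendKeys("testuser@example.com");
            driver.FindElement(By.Id("password")).SendKeys("test-password");
            driver.FindElement(By.Id("signInButton")).Click();

            // After a real login, the app holds real tokens and the test can
            // continue by exercising application logic that calls real APIs.
        }
    }
}
```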
COMPONENTS UNDER TEST
Sometimes, though, the OAuth-related infrastructure gets in the way, e.g. if it is not possible to automate 2FA actions such as typing in a one-time password.
When working with this type of technology, it is possible to mock the identity system. One option is to pretend that Apple authentication has completed, while issuing your own mock tokens with a JWT library, with the same properties as the Apple ones.
A key requirement, of course, is that zero code is changed in UIs or APIs, so that they continue to run the same production logic, with no awareness that they are using mock tokens.
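As a hedged sketch of that idea in C#, using the System.IdentityModel.Tokens.Jwt package. Note that real Apple ID tokens are asymmetrically signed and carry Apple's issuer; the symmetric test key, issuer URL and claim values below are test-only stand-ins:

```csharp
// Hedged sketch: minting a mock "Apple-like" ID token for tests.
// Real Apple tokens are asymmetrically signed by Apple; the symmetric
// key, issuer and claims here are test-only stand-ins.
using System;
using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using System.Text;
using Microsoft.IdentityModel.Tokens;

public static class MockAppleTokens
{
    public static string Issue(string subject)
    {
        var key = new SymmetricSecurityKey(
            Encoding.UTF8.GetBytes("test-signing-key-at-least-32-bytes!!"));
        var credentials = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

        var token = new JwtSecurityToken(
            issuer: "https://mock-apple.test",   // stand-in for Apple's issuer
            audience: "com.example.myapp",       // your app's client id
            claims: new[]
            {
                new Claim("sub", subject),
                new Claim("email", subject + "@example.com")
            },
            expires: DateTime.UtcNow.AddMinutes(10),
            signingCredentials: credentials);

        return new JwtSecurityTokenHandler().WriteToken(token);
    }
}
```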
HTTP MOCK ENDPOINTS
The open source Wiremock tool can be a useful addition to your toolbox in this case, as in these API-focused tests of mine. To use this type of tool, an automated test stage of the pipeline would need to repoint UIs and/or APIs to a URL that stands in for Apple's identity system, so some deployment work would be needed.
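For instance, a hedged sketch with WireMock.Net (the .NET port of Wiremock), standing up a stub token endpoint that your deployment could repoint to; the path and response fields are assumptions to adapt:

```csharp
// Hedged sketch using WireMock.Net: a stub standing in for the identity
// provider's token endpoint. The path and response fields are placeholders;
// mirror the real response shape your code expects.
using WireMock.RequestBuilders;
using WireMock.ResponseBuilders;
using WireMock.Server;

class MockIdentityServer
{
    static void Main()
    {
        var server = WireMockServer.Start(5001);

        server
            .Given(Request.Create().WithPath("/auth/token").UsingPost())
            .RespondWith(Response.Create()
                .WithStatusCode(200)
                .WithBodyAsJson(new
                {
                    access_token = "mock-access-token",
                    id_token = "mock-id-token", // e.g. minted by the JWT sketch above
                    token_type = "Bearer",
                    expires_in = 3600
                }));

        System.Console.ReadLine(); // keep the stub alive while tests run
        server.Stop();
    }
}
```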
DESIGNING THE INFRASTRUCTURE
As always, of course, it depends what you want to focus on testing and which areas you are happy to mock. I recommend thinking this through end to end, considering both UIs and APIs. The important thing is to avoid situations where you are blocked and unable to test.
Sorry, no code here, because I am looking for a better idea, or to find out whether I am on the right track.
I have two websites; let's call them A and B.
A is a website exposed to the internet, and only users with a valid account can access it.
B is an internal (intranet) website using Windows authentication against Active Directory. I want application B (intranet) to create users for application A.
Application A uses the built-in ASP.NET JWT token authentication.
My idea is to expose an API on the internet-facing website (A) and let B access this API. I can use CORS to make sure only B has access to the endpoint, but I am not sure whether this is good enough protection. A third-party company will perform security penetration tests, so this approach might fail the security test.
Or
I can use Entity Framework to update the AspNetUsers table manually. No idea whether this is feasible or the right way of doing things.
Any other solution?
In my opinion, don't expose your internal operations through external solutions like implementing APIs, etc.
Just share the database so that it is accessible to B. This way, server administration is the only security concern, and nobody knows how you work. In addition, it doesn't matter how each application implements user authentication (whether Windows Authentication or JWT), and each keeps an independent infrastructure.
There are multiple solutions to this one problem. In the end, it really depends on your specific criteria.
You could go with:
The B (intranet) website reaching into the database and creating users as needed.
The A (internet) website exposing an API with the necessary endpoints to create users.
The A (internet) website running a data migration every now and then to insert users.
But they all come with their ups and downs; I'll try to break them down for you.
API solution
Ups:
Single responsibility: you have only one piece of code touching this database, which makes it easier to mitigate side effects.
It is "future proof": you could easily have more services using this API.
Downs:
Increased attack surface: the API is public, so third parties may try to play with it.
The API must be maintained as the database model changes (one more piece to maintain).
Not the fastest solution to implement.
Database direct access
Ups:
Attack surface minimal.
Very quick to develop
Downs:
The database model has to be maintained in two places.
Migrations and deployments have to be coordinated, which is hard to maintain.
Makes the system more error-prone.
Migration on release
Ups:
Cheapest to develop
Highest performance on inserts
Downs:
Not flexible
Very slow for the user
Many deployments
Manual work (will be costly over time)
In my opinion, you should go for the API and secure its access with an OAuth mechanism. If OAuth is too time-consuming to put in place, maybe you can try some simpler auth protocols.
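If you do go the API route, here is a hedged sketch of what the endpoint on A could look like (ASP.NET Core minimal API with Identity). The route, request type and policy name are illustrative, and the authentication setup is elided:

```csharp
// Hedged sketch (ASP.NET Core minimal API): a user-creation endpoint on
// site A for site B to call. Route, record type and policy name are
// illustrative; authentication setup (e.g. JWT bearer) is elided.
using Microsoft.AspNetCore.Identity;

var builder = WebApplication.CreateBuilder(args);
// ... AddAuthentication/AddAuthorization and Identity registration go here ...
var app = builder.Build();

app.MapPost("/internal/users",
    async (CreateUserRequest req, UserManager<IdentityUser> users) =>
    {
        var user = new IdentityUser { UserName = req.UserName, Email = req.Email };
        var result = await users.CreateAsync(user, req.Password);

        return result.Succeeded
            ? Results.Created($"/internal/users/{user.Id}", user.Id)
            : Results.BadRequest(result.Errors);
    })
    // CORS alone is browser-enforced, so require a real server-side credential.
    .RequireAuthorization("InternalCallersOnly");

app.Run();

public record CreateUserRequest(string UserName, string Email, string Password);
```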
How do I add a custom Azure Policy that restricts Azure Data Factory linked services to fetching data-store credentials from Azure Key Vault, instead of credentials being put directly in the ADF linked service? Please suggest ARM or PowerShell methods for implementing the policy.
As of yesterday, the Data Factory Azure Policy integration is available, which means you can now find some built-in policies that can be assigned to ADF.
One of those is exactly what you're asking for, as you can see in the image below. You can find more information here.
Edit: Based on your comment, I'm editing this answer with the info you want. When it comes to custom policies, it's pretty much up to you to come up with them and create what fits your needs. In your particular case, I've created one policy that does what you want, please see here.
This policy will audit your data factory linked services and check if they're using a self-hosted integration runtime. Currently, that check is only done for a few types of integration runtimes (if you look at the policy, you can see 5 of them) which means that if you want to check more types of linked services, you'll need to add them to the list of allowed values and select them when assigning the policy definition.
Bear in mind that for some linked service types, such as Key Vault, that check won't make sense, since that service can't use a self-hosted IR.
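For the PowerShell side of the question, here is a hedged sketch of deploying and assigning a custom audit policy with Az PowerShell. The rule below is a simplified illustration (audit ADF linked services whose type is in a parameter list), not the exact policy linked above, and the field alias used is an assumption:

```powershell
# Hedged sketch: create and assign a custom policy definition with Az PowerShell.
# NOTE: the alias in the second condition is an assumption; list the real
# aliases available in your tenant with Get-AzPolicyAlias before using it.
$rule = @'
{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.DataFactory/factories/linkedservices" },
      { "field": "Microsoft.DataFactory/factories/linkedservices/type",
        "in": "[parameters('listOfLinkedServiceTypes')]" }
    ]
  },
  "then": { "effect": "audit" }
}
'@

$params = @'
{
  "listOfLinkedServiceTypes": {
    "type": "Array",
    "metadata": { "displayName": "Linked service types to audit" }
  }
}
'@

$definition = New-AzPolicyDefinition -Name 'audit-adf-linkedservice-credentials' `
    -DisplayName 'Audit ADF linked services for credential handling' `
    -Policy $rule -Parameter $params -Mode All

# '<sub-id>' is a placeholder scope; assign at the scope you need.
New-AzPolicyAssignment -Name 'audit-adf-ls' -Scope '/subscriptions/<sub-id>' `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfLinkedServiceTypes = @('AzureSqlDatabase','AzureBlobStorage') }
```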
Knowing full well that there are many types of workflows for different ways of integrating Pact, I'm trying to visualize what a common workflow looks like. I developed this swimlane diagram for the Pact Broker workflow.
How do we run a Provider verification on an older Provider build?
How does this change with tags?
When does the webhook get created back to the Provider?
What if different Providers have different base urls (i.e. build systems)?
How does a new Provider build alert the Consumers if the Provider fails?
Am I thinking about this flow correctly?
I've tried to piece together my understanding from Webhooks, Using pact where the consumer team is different from the provider team, and Publishing verification results to a Pact Broker. Assuming I am thinking about the problem the right way and did not completely miss some documentation, I'd gladly write up suggested workflow documentation for the community.
Your swimlane diagram is a good picture of the workflow, with the caveat that once everything is all set up, it's rare to manually start provider builds from the broker.
The provider doesn't ever notify the consumers about verification failure (or success) in the process. If it did, then you could end up with circular builds.
I think about it like this:
The consumer tests create a contract (the Pact file).
This step also verifies that the consumer can work with a provider that fulfils that contract (using the mock provider).
Then, the consumer gives this Pact file to the broker (if configured to do so).
Now that there's a new pact, the broker (if configured) can trigger a provider build.
The provider's CI infrastructure builds the provider, and runs the pact verification.
The provider's CI infrastructure (if configured) tells the broker about the verification result.
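To make the "consumer gives this Pact file to the broker" step concrete: publishing is normally done with the Pact CLI or a client library, but underneath it is a single HTTP call to the broker's documented endpoint. A hedged C# sketch, where the broker URL, participant names and version are placeholders:

```csharp
// Hedged sketch: publishing a pact file to a Pact Broker with raw HTTP.
// In practice the Pact CLI or client library does this for you; the broker
// URL, participant names and version below are placeholders.
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class PublishPact
{
    static async Task Main()
    {
        var pactJson = File.ReadAllText("pacts/consumer-provider.json");

        using (var client = new HttpClient())
        {
            var url = "https://broker.example.com/pacts/provider/MyProvider" +
                      "/consumer/MyConsumer/version/1.0.0";

            var response = await client.PutAsync(url,
                new StringContent(pactJson, Encoding.UTF8, "application/json"));

            Console.WriteLine($"Broker responded: {(int)response.StatusCode}");
        }
    }
}
```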
The broker and the provider's build system are the only bits that know about the verification result - it isn't passed back to the consumer at the moment.
The consumer passing its tests means the consumer can say: "I've written this communication contract and confirmed that I can hold up my side of it." Failure to verify the contract at the provider end doesn't change this statement.
However, if the verification succeeds, you may want to trigger a consumer deployment. As Beth Skurrie (one of the primary contributors to Pact) points out in the comments below:
Communicating the status of the verification back to the consumer is actually a highly important thing, as it tells the consumer whether or not they can be deployed safely. It is the missing part of the pact workflow at the moment, and I'm working away as fast as I can to rectify this.
Currently, since the verification status is information you might like to know about - especially if you're unable to see the provider's CI infrastructure - you might like to check out the pact build badges, which are a lighter way of checking the broker.
I work for a Canadian government department, and our group primarily uses tools from Microsoft, including Visual Studio. We need to carry out load-testing on one of our department's web applications. I have no prior experience with load-testing, but from what I understand, this would entail creating web performance tests that record various testing scenarios, and then creating load tests that point to these web performance tests.
One complication is that our application relies on an external authentication service, a service used by other applications (and other departments). Our service agreement with this service provider explicitly stipulates that we not subject the service to load-testing.
So we'll need to find a way to bypass the authentication mechanism to carry out our load-testing. Here's the outline of one strategy a colleague and I came up with:
1. Log in normally to the web site, going through the authentication service as normal.
2. Use developer tools installed in the browser to capture the cookie(s) created when authenticating.
3. Create a web performance test, and add some code to the web performance test to use the cookie(s), thereby reusing the session I had established when logging in manually.
But I'm not entirely confident that this is the right approach. And even if it is, I have no prior experience with creating web performance tests or load tests, so I'm a bit lost as to how to go about programmatically loading a cookie inside a web performance test.
Does anyone have any suggestions?
I would break down the task into smaller pieces. If your main job is to load test the application, I would set it up on the internal network with Windows authentication or anonymous authentication, and modify the application to avoid having to deal with that part of the problem.
For the authentication piece of the problem, try setting it up so a single static cookie will work every time. (If you need thousands of distinct user cookies, this becomes a bigger job, of course.)
See here for a discussion of the Apache JMeter cookie manager.
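If you stay with Visual Studio web performance tests instead, a coded test can inject the captured cookie directly. A hedged sketch, where the class name, cookie name, value and URLs are all placeholders; check which cookie(s) your auth service actually sets:

```csharp
// Hedged sketch: a Visual Studio coded web performance test that injects a
// session/auth cookie captured from a manual login. The cookie name, value
// and URLs are placeholders.
using System.Collections.Generic;
using System.Net;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class AuthenticatedWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        var request = new WebTestRequest("https://myapp.example.com/home");

        // Reuse the session established manually, skipping the login redirect.
        request.Cookies.Add(new Cookie(".ASPXAUTH", "<captured-value>",
            "/", "myapp.example.com"));

        yield return request;
    }
}
```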
I would ask if the authentication could be stubbed out. Instead of calling the 3rd party, call a stub application which will return the equivalent responses. That way, instead of stressing the 3rd party, it's only your (self-hosted) stub that is affected.
This is the mirror image of having no front-end application, in which case a test harness would be required to emulate the front-end; a stub is the equivalent for emulating a back-end application.
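As a hedged illustration, a self-hosted stub can be as small as a minimal ASP.NET Core app returning a canned response; the route and response shape are placeholders to mirror whatever your real provider returns:

```csharp
// Hedged sketch: a minimal self-hosted stub standing in for the external
// authentication service. The route and response shape are placeholders.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Always "authenticate" successfully with a canned response.
app.MapPost("/auth/token", () => Results.Json(new
{
    access_token = "stub-token",
    token_type = "Bearer",
    expires_in = 3600
}));

app.Run("http://localhost:5005");
```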