@Transactional doesn't work in @QuarkusTest - integration-testing

Data created inside a test method is visible to other tests even when the @Transactional annotation is used on the test method or test class.
@QuarkusTest
@Transactional
A sample application is available here.

Not an answer you will like, but that's something we don't support right now.
Changes made in a test are not rolled back.
I thought we had an issue for that, but I couldn't find an obvious one apart from this old one where we, among other things, discussed this subject: https://github.com/quarkusio/quarkus/issues/461 (this comment in particular: https://github.com/quarkusio/quarkus/issues/461#issuecomment-457202572).
Could you create a specific issue in our tracker?

Related

Cannot resolve dependency in a custom ValidationAttribute, IServiceProvider is set to null on the ValidationContext parameter

I have an ASP.NET Core 2.1 MVC app using Autofac, following their documentation on setup (https://autofaccn.readthedocs.io/en/latest/integration/aspnetcore.html).
I am trying to resolve a dependency in a custom ValidationAttribute. The value returned from validationContext.GetService is always null. Inspecting the validationContext, the private member serviceProvider is always null.
What am I missing in the setup that makes this not work? The dependencies resolve everywhere else in the app, just not in the ValidationAttributes.
public class MyCustomAttribute : ValidationAttribute
{
    public MyCustomAttribute()
    {
    }

    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        // THIS IS ALWAYS RETURNING NULL
        var service = (IMyService)validationContext.GetService(typeof(IMyService));
        return ValidationResult.Success;
    }
}
I can't say I've tried this before, but doing some searching I found this issue which seems to indicate that the service provider (not the ServiceContainer property, but the service provider that will respond to GetService calls) should always be populated. Granted, that's from an archived repo from early on in .NET Core, but it should still hold.
Looking at the source for ValidationContext I see that the private serviceProvider field is actually a function that needs to be instantiated somewhere; it's not actually a reference to a provider proper. That means if it's null, one of two things is happening:
1. The path through ASP.NET Core that's instantiating that ValidationContext is not passing in the IServiceProvider required to provide services.
2. Something is broken.
If it's #1, I'd guess there are a variety of potential reasons. I'd think about things like...
- The attribute is being used by something running outside the "ASP.NET Pipeline" - like a manually invoked validation or possibly something at application startup where there's no request at the moment.
- The attribute is being used in a test where the full pipeline isn't in effect.
Something like that. I'm not saying this is what's happening, but I've seen questions like this before where it appears something isn't working right when it's actually an application code problem. For example, there are lots of questions about why "instance-per-request" dependencies aren't working and it turns out the code is running on a background thread where there's no request so... yeah. I don't know how your app works, but that sort of "I'm doing something I forgot to mention because I didn't think it was relevant" stuff comes into play here.
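If it's the first cause, the fix is usually on the calling side: when validation is invoked manually rather than by MVC's model binding, the IServiceProvider has to be handed to the ValidationContext explicitly, otherwise GetService returns null. Here is a minimal sketch of that, reusing the MyCustomAttribute from your question; MyModel and the appServices parameter are made-up names for illustration:

using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class MyModel
{
    [MyCustom]
    public string Name { get; set; }
}

public static class ManualValidationExample
{
    public static bool Validate(MyModel model, IServiceProvider appServices)
    {
        // Without the second argument, the context has no service provider and
        // GetService inside MyCustomAttribute returns null.
        var context = new ValidationContext(model, appServices, items: null);
        var results = new List<ValidationResult>();
        return Validator.TryValidateObject(model, context, results, validateAllProperties: true);
    }
}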
Let's assume you've got a super vanilla ASP.NET Core app, though. Based on the issue I mentioned earlier and the code you've posted, this looks like it should work but it's not. There shouldn't be anything you need to wire up for this, it should just work. Given that, you might want to file an issue about it. You may have found something that legitimately isn't working.
Before you do that, you might want to debug a little more. You can step right into the ASP.NET Core source code, and that could help you figure out what's up. The article I linked there explains how to set it up. It's not a two-step process and needs screenshots to help, or I'd put it right in here.
Set a breakpoint on your failing statement up there and then switch over to the Visual Studio "call stack" window. Click on the call stack frames higher up in the stack and see what's actually creating that context object. With a little clicking around and intuition you can probably figure out where the issue is. Maybe it'll point to something in your app you didn't realize you were doing, maybe it'll point to a bug in ASP.NET Core. If it's a bug in ASP.NET Core, having the information you found from your debugging session will be really helpful to that team.
Finally, I'd be remiss if I didn't mention that manually resolving a service in an attribute like this is technically service location rather than dependency injection and it'd be a better all-around solution if you avoided it entirely. There's an extremely similar question to yours right here walking through how to get around this with Simple Injector, though the principle holds for Autofac, too. Setting up model validators and using a model validator provider rather than attributes might be a better, more testable way to go. The answer in that question has more explanation on this.

Testing ASP.NET MVC 3 business/data logic

What is the best way to test data access layers and business logic in MVC 3 solutions?
I currently have a project where I am using repository classes to access databases, which in turn use NHibernate. When I try to create a unit test for them using the auto-generated unit tests, they always fail since the configuration for NHibernate is in web.config and the tests don't look there. What am I doing wrong? This particular method returns this error
"The Web request 'http://localhost:35601/' completed
successfully without running the test"
The test methods look like this
[TestMethod()]
[HostType("ASP.NET")]
[AspNetDevelopmentServerHost("C:\\Users\\...", "/")]
[UrlToTest("http://localhost:35601/")]
public void GetByIdTest()
{
    string someid = "..";
    SomeObj actual = MyRepository.GetById(someid);
    Assert.AreEqual(someid, actual.Id);
}
How do I get this to work properly?
Putting the settings in the app.config should solve the issue you posed above; however, the more correct answer is that you should be using a mocking framework to mock the NHibernate session.
The fact that you found an area that you would need to change to accommodate testing is great!!! That is one advantage of unit testing; you find coupling in your code that should be refactored.
I found another post that directly addresses what you are trying to do: Mocking an NHibernate ISession with Moq. There are two answers in the post that offer two approaches which may be helpful.
I hope this helps. I haven't used NHibernate, so I can't speak authoritatively about it or promise that the link above will provide you with an answer, but each answer has ten upvotes so it looks like it was a solid post!
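For what it's worth, here is a rough sketch of what the mocked test could look like with Moq. It assumes the repository takes an ISession through its constructor and that GetById simply delegates to ISession.Get; the SomeObj and MyRepository stand-ins below are guesses at your real types, not something from your code:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Moq;
using NHibernate;

// Stand-ins for the types from the question; the real repository presumably
// wraps more NHibernate calls.
public class SomeObj
{
    public virtual string Id { get; set; }
}

public class MyRepository
{
    private readonly ISession _session;
    public MyRepository(ISession session) { _session = session; }
    public SomeObj GetById(string id) { return _session.Get<SomeObj>(id); }
}

[TestClass]
public class MyRepositoryTests
{
    [TestMethod]
    public void GetByIdTest()
    {
        // No web server, no web.config and no real database involved.
        var expected = new SomeObj { Id = "abc" };
        var session = new Mock<ISession>();
        session.Setup(s => s.Get<SomeObj>("abc")).Returns(expected);

        var repository = new MyRepository(session.Object);
        SomeObj actual = repository.GetById("abc");

        Assert.AreEqual(expected.Id, actual.Id);
    }
}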

I need to change the table schema without reloading the app domain (EF Model Caching Issue)

I have a custom implementation of a multi-tenant code-first system, basically using SQL schema divisions for the tenants. I am using the ToTable method to map the schema correctly on the first call, but because the model is cached, changing the schema on the second call to a different tenant does not work. Is there any way in EF 4.1 to disable the caching or to rebuild the model every time? Yes, I know this is not great for performance. Thanks for any help.
Although it is an old question, for all those who face this issue and end up finding it while looking for a possible solution, here it goes...
Initially, caching could be turned off by setting the "CacheForContextType" property of the ModelBuilder to 'false' in the OnModelCreating method. This method is defined in DbContext as virtual and needs to be overridden. But in EF 4.1 this property has been removed, since model creation is an expensive process and the Microsoft team wanted to promote a better pattern. Check this link.
It seems like the Build() command on the ModelBuilder is what you're looking for.
modelBuilder.Build().Compile().CreateObjectContext...
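To flesh that out a little: in EF 4.1 you can build and compile a model per tenant schema yourself with DbModelBuilder.Build/Compile and pass the resulting DbCompiledModel to the DbContext constructor. A sketch under those assumptions follows; Widget, the schema names and the caching strategy are placeholders, and the per-schema cache is only there to limit the performance hit mentioned in the question:

using System.Collections.Concurrent;
using System.Data.Common;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class TenantContext : DbContext
{
    // Cache one compiled model per tenant schema so the expensive
    // Build/Compile step runs once per tenant, not once per request.
    private static readonly ConcurrentDictionary<string, DbCompiledModel> Models =
        new ConcurrentDictionary<string, DbCompiledModel>();

    private TenantContext(DbConnection connection, DbCompiledModel model)
        : base(connection, model, contextOwnsConnection: false) { }

    public DbSet<Widget> Widgets { get { return Set<Widget>(); } }

    public static TenantContext For(string tenantSchema, DbConnection connection)
    {
        var model = Models.GetOrAdd(tenantSchema, schema =>
        {
            var builder = new DbModelBuilder();
            // Map the entity into the tenant's schema instead of dbo.
            builder.Entity<Widget>().ToTable("Widgets", schema);
            return builder.Build(connection).Compile();
        });
        return new TenantContext(connection, model);
    }
}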

WCF Services - splitting code into multiple classes

I'm currently looking at developing a WCF Service for the first time, but am a bit confused.
Say I've got a Windows application that is going to call into this web service; that service is then going to call our data methods to retrieve and save the data.
Now, say for example we have 2 classes in our solution, Customer and Product. Do all methods within the service have to go into the same class file (e.g. MyService.svc), or can they be split into several classes replicating the main data layer, e.g. Customer.cs and Product.cs? If they can be split, how do these get called from within the Windows Forms application? Would each class be a different endpoint?
At the moment I can access the methods within the main class (e.g. MyService.svc), but I can't see any of the methods in the other classes, even though I have attributed them with "ServiceContract" and "OperationContract".
I have a feeling I'm missing something simple somewhere, just not sure what.
I would be grateful if some nice person could point me in the direction of a tutorial on doing this, as every tutorial I've found only includes the single class :)
Thanks in advance.
What you need to define is data contracts for your service.
Theoretically, these data contracts could be your business entities (since 3.5 SP1 and its WCF POCO support).
It's better, though, to create separate entities for your service and then to create conversion classes that can convert your business entities into service entities and the other way around.
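As a rough sketch of that idea (all type and member names here are invented for illustration), the service-facing entity carries the data contract attributes while a small conversion method keeps the business entity off the wire:

using System.Runtime.Serialization;
using System.ServiceModel;

// Stand-in for the existing business entity in the data layer.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Service-facing entity, kept separate from the business/data-layer Customer.
[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }

    // Conversion lives in one place so the service contract never leaks
    // the business entity across the wire.
    public static CustomerDto From(Customer customer)
    {
        return new CustomerDto { Id = customer.Id, Name = customer.Name };
    }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(int id);
}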
Actually, after loads of searching, I finally seemed to find what I was looking for just after posting my question (typical).
I've found the following page - http://www.scribd.com/doc/13136057/ChapterImplementing-a-WCF-Service-in-the-Real-World
Although I've not gone through it yet, it does look like it will cover what I'm after.
Apologies for wasting anyone's time :) Hopefully this will be useful to someone else looking for the same thing.
It sounds like you only need one service. However, if you need to create multiple services, consider this as an example.
[ServiceContract(Name = "Utility", Namespace = Constants.COMMON_SERVICE_NAMESPACE)]
public interface IService
[ServiceContract(Name="Documents", Namespace = Constants.DOCUMENTS_SERVICE_NAMESPACE)]
public interface IDocumentService
[ServiceContract(Name = "Lists", Namespace = Constants.LISTS_SERVICE_NAMESPACE)]
public interface IListService
Remember that you can create multiple data contracts inside a single service, and it is the best solution for a method that will require a reference to Customer(s) and Product(s).
It might help to take a look at MSDN's data contract example here.
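To make the endpoint part of the question concrete, here is a hedged sketch; it uses self-hosting only to keep the example short, whereas with .svc/IIS hosting the equivalent endpoints would be declared in web.config. The point it illustrates: contracts can live in separate files and even separate implementation classes, and each contract is exposed on its own endpoint that the Windows Forms client references and calls independently.

using System;
using System.ServiceModel;

[ServiceContract]
public interface IDocumentService
{
    [OperationContract]
    string GetDocument(int id);
}

[ServiceContract]
public interface IListService
{
    [OperationContract]
    string[] GetLists();
}

// One class can implement several split contracts (or each contract can have
// its own implementation class; WCF doesn't care which file they live in).
public class MyService : IDocumentService, IListService
{
    public string GetDocument(int id) { return "document " + id; }
    public string[] GetLists() { return new[] { "list1", "list2" }; }
}

public static class HostExample
{
    public static void Main()
    {
        // Each contract gets its own endpoint; the client adds a service
        // reference per endpoint (or per .svc file when hosting in IIS)
        // and calls them independently from the Windows Forms app.
        using (var host = new ServiceHost(typeof(MyService), new Uri("http://localhost:8000/MyService")))
        {
            host.AddServiceEndpoint(typeof(IDocumentService), new BasicHttpBinding(), "documents");
            host.AddServiceEndpoint(typeof(IListService), new BasicHttpBinding(), "lists");
            host.Open();
            Console.WriteLine("Press Enter to stop.");
            Console.ReadLine();
        }
    }
}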

How can I make my Selenium tests less brittle?

We use Selenium to test the UI layer of our ASP.NET application. Many of the test cases test longer flows that span several pages.
I've found that the tests are very brittle, broken not just by code changes that actually change the pages but also by innocuous refactorings such as renaming a control (since I need to pass the control's ClientID to Selenium's Click method, etc.) or replacing a GridView with a Repeater. As a result I find myself "wasting" time updating string values in my test cases in order to fix broken tests.
Is there a way to write more maintainable Selenium tests? Or a better web UI testing tool?
Edited to add:
Generally the first draft is created by recording a test in the IDE. (This first step may be performed by QA staff.) Then I refactor the generated C# code (extract constants, extract methods for repeated code, maybe repeat the test case with different data, etc). But the general flow of code for each test case remains reasonably close to the originally generated code.
I've found the PageObject pattern very helpful.
http://code.google.com/p/webdriver/wiki/PageObjects
more info:
- What's the Point of Selenium?
- Selenium Critique
Maybe a good way to start is to incrementally refactor your test cases.
I use the same scenario you have: Selenium + C#.
Here is what my code looks like.
A test method will look something like this:
[TestMethod]
public void RegisterSpecialist(UserInfo usrInfo, CompanyInfo companyInfo)
{
    var RegistrationPage = new PublicRegistrationPage(selenium)
        .FillUserInfo(usrInfo)
        .ContinueSecondStep();
    RegistrationPage.FillCompanyInfo(companyInfo).ContinueLastStep();
    RegistrationPage.FillSecurityInformation(usrInfo).ContinueFinishLastStep();
    Assert.IsTrue(RegistrationPage.VerifySpecialistRegistrationMessagePayPal());
    selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
    paypal.LoginSandboxPage(usrInfo.sandboxaccount, usrInfo.sandboxpwd);
    Assert.IsTrue(paypal.VerifyAmount(usrInfo));
    paypal.SubmitPayment();
    RegistrationPage.GetSpecialistInformation(usrInfo);
    var bphome = new BPHomePage(selenium, string.Format(Resources.GlobalResources.LoginBPHomePage, usrInfo.AccountName, usrInfo.Password));
    Assert.IsTrue(bphome.VerifyPageWasLoaded(usrInfo));
    Assert.IsTrue(bphome.VerifySpecialistProfile());
    bphome.Logout();
}
A page object will look something like this:
public class PublicRegistrationPage
{
    public ISelenium selenium { get; set; }

    #region Constructors
    public PublicRegistrationPage(ISelenium sel)
    {
        selenium = sel;
        selenium.Open(Resources.GlobalResources.PublicRegisterURL);
    }
    #endregion

    #region Methods
    public PublicRegistrationPage FillUserInfo(UserInfo usr)
    {
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserFirstName", usr.FirstName);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserLastName", usr.LastName);
        selenium.Select("ctl00_cphComponent_ctlContent_wizRegister_ddlUserCountry", string.Format("label={0}", usr.Country));
        selenium.WaitForPageToLoad(Resources.GlobalResources.TimeOut);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserEmail", usr.Email);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserDirectTel", usr.DirectTel);
        selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserMobile", usr.Mobile);
        return this;
    }
    #endregion
}
Hope this helps.
How are you creating your Selenium tests, by recording them and playing them back? What we have done is build an object model around pages so that you call a method like "clickSubmit()" rather than clicking on an id (with a naming convention for these ids), which allows Selenium tests to survive many changes.
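For illustration, a minimal sketch of that approach (the class, methods and IDs below are invented): tests call semantic methods such as ClickSubmit(), and the id naming convention is confined to locator constants inside the page class, so a renamed control means updating one string in one place.

using Selenium; // Selenium RC .NET client

// Tests call semantic methods instead of raw IDs, so a renamed control only
// needs one locator constant updated here.
public class LoginPage
{
    private readonly ISelenium selenium;

    // Naming convention: locator constants mirror the control names, so a
    // refactoring maps one-to-one to a single string change.
    private const string UserNameField = "ctl00_cphContent_tUserName";
    private const string PasswordField = "ctl00_cphContent_tPassword";
    private const string SubmitButton = "ctl00_cphContent_btnSubmit";

    public LoginPage(ISelenium selenium)
    {
        this.selenium = selenium;
    }

    public LoginPage EnterCredentials(string userName, string password)
    {
        selenium.Type(UserNameField, userName);
        selenium.Type(PasswordField, password);
        return this;
    }

    public void ClickSubmit()
    {
        selenium.Click(SubmitButton);
        selenium.WaitForPageToLoad("30000");
    }
}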
You may or may not be able to write tests that are resilient to refactoring. Here's how to make the refactoring less painful:
- Continuous integration is essential.
- Run them every day or every build. The sooner it's fixed, the easier.
- Ensure devs can run the tests themselves. Again, the sooner it's seen and fixed, the easier.
- Keep Selenium tests few. They should focus on critical-path / pri 1 test scenarios. Deep testing should be done at the unit test level (or jsunit tests). Integration tests are always expensive and less valuable.
Relying on low-level concepts like XPaths, CSS selectors or IDs for end-to-end tests is a recipe for unstable tests.
I advise using testRigor to produce tests that won't break any time you run/change/improve your application a little bit.
The code analogous to the page object one above would look like this:
enter "Peter" into "First Name"
enter "Pen" into "Last Name"
enter "US" into "Country" below "User Data"
enter stored value "email" into "Email"
enter stored value "password" into "Password"
enter "415-123-4567" into "Direct Telephone"
enter "415-123-4568" into "Mobile Number"
click "Submit"
testRigor associates text that looks like a label with the corresponding input, so as long as your page looks the same from an end-user's perspective, the testRigor scripts will stay green. Here is the doc.
disclaimer: I'm a co-founder of testRigor. I co-founded it because we had those exact issues ourselves.
Hope this helps.
There are no innocuous changes when it comes to test automation ;)
We use the SAFS framework with Rational Robot (RRAFS) to minimize impact to our automation scripts. There's still work to maintain the application map, but the scripts remain stable for the most part. The SAFS framework sounds very similar to the method cynicalman mentions, but already packages up the generic methods you would use in your scripts.
The SAFS site says there's partial support for Selenium, so this may work for you.
I've found that using XPath expressions in Selenium RC adds a lot to the robustness of a test.
I write my tests in a similar manner. The first pass is often written via the IDE/recorder to get most of my page-flow and click operations. Once I've got that, I begin stepping through the test via Selenium RC, adding assertions and changing absolute widget locators to more readable and friendly XPath expressions (as well as documenting the test! :) )
One thing to be aware of: if your tests are XPath-heavy, they may run a little slower in IE6 due to its poor JavaScript execution abilities. (I have some test suites that take almost an hour longer to execute under IE than under FF. It's manageable, but just something to keep in mind when you're writing the tests.)
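As a small illustration of trading an absolute widget locator for a friendlier XPath expression (the locators are examples only, not taken from a real app):

using Selenium;

public static class LocatorExamples
{
    public static void FillFirstName(ISelenium selenium, string firstName)
    {
        // Brittle: the full auto-generated ASP.NET client ID breaks whenever a
        // parent naming container or the control itself is renamed.
        // selenium.Type("ctl00_cphComponent_ctlContent_wizRegister_tUserFirstName", firstName);

        // Friendlier: match a stable fragment of the id, so most markup
        // refactorings leave the locator working.
        selenium.Type("xpath=//input[contains(@id, 'tUserFirstName')]", firstName);

        // Or anchor on the visible label text instead of the id:
        // selenium.Type("xpath=//label[normalize-space(.)='First Name']/following::input[1]", firstName);
    }
}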
Selenium in theory has an abstraction called UI Element (the documentation is here).
The features would be:
- abstract locators, independent of the actual HTML implementation; this would map well to the concept of a component or widget in a web framework,
- rollup rules, allowing several commands to be merged into a single, more abstract command.
I struggled for a couple of days to leverage this feature, but in the end I decided to abandon it, for the following reasons:
- some concepts, such as that of offset locators (think of them as parts of a component), are not fully or usefully developed;
- the feature is not fully supported in formatters, and the more recent the formatter the less the feature is supported, hinting that the core Selenium evolution is leaving this feature behind;
- it's not fully integrated in Selenium 2.0 (WebDriver).
I think XPath is the best way to ensure robust Selenium tests.
I am currently working on a library to help make writing XPath expressions easier.
If interested, you can check it out here:
http://www.unit-testing.net/CurrentArticle/How-To-Write-XPath-for-Selenium-Tests.html
