Is there a way to fix the system clock to a specific time when running tests with Robolectric? I have some code that depends on the day of the week. In RSpec, there's the Timecop gem that lets you freeze the system at a given time. Does Robolectric have an equivalent?
I haven't seen a ready-to-use feature for setting an explicit time/date yet.
You can achieve this by creating your own ShadowSystemClock implementation, or by shadowing whichever other class/method the date is read from. A minimal sketch follows the links below.
Current SystemClock shadow implementation
https://github.com/robolectric/robolectric/blob/master/src/main/java/org/robolectric/shadows/ShadowSystemClock.java
Implement own shadow
http://robolectric.org/extending/
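For illustration, here is a minimal sketch of such a shadow. The class name and the setFixedTime helper are invented for this example, and code that reads the date via System.currentTimeMillis or Calendar would need a different seam (e.g. shadowing that call site instead):

    import android.os.SystemClock;
    import org.robolectric.annotation.Implementation;
    import org.robolectric.annotation.Implements;

    // Hypothetical shadow that pins the clock to a fixed instant,
    // Timecop-style.
    @Implements(SystemClock.class)
    public class ShadowFixedSystemClock {
        private static long fixedMillis = 0L;

        // Illustrative helper (not part of Robolectric's API):
        // call from a test to "freeze" the clock.
        public static void setFixedTime(long millis) {
            fixedMillis = millis;
        }

        @Implementation
        public static long uptimeMillis() {
            return fixedMillis;
        }

        @Implementation
        public static long elapsedRealtime() {
            return fixedMillis;
        }
    }

In a test you'd register it with something like @Config(shadows = { ShadowFixedSystemClock.class }) and call ShadowFixedSystemClock.setFixedTime(...) before exercising the day-of-week logic.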
I've found that many languages provide some way to change code at runtime, and many people ask how to do this in one language or another. By "change code" I mean having the program rewrite its own code at runtime, using reflection or something else.
I have around 6 years of experience in Java application development, and I have never come across a problem where I had to change code at runtime.
Can anyone explain why we would need to change code at runtime?
I have experienced three huge benefits of changing code at runtime:
Fixing bugs in a production environment without shutting down the application server. This allowed us to fix bugs in just one part of the application without interrupting the whole system.
The possibility of changing business rules without having to deploy a new version of the application, allowing quicker delivery of features.
Writing unit tests is easier. For example, you can mock dependencies, add some desired behaviour to objects, etc. The Spock Framework does this well (see the sketch after this list).
Of course, we only got these benefits because we have a very well-defined development process for how to proceed in these situations.
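As an illustration of the third point, here is a minimal Java sketch using Mockito (a different library than Spock, but the same idea; the class name is invented for the example). The mock's class is generated at runtime, so behaviour is "added" to an object without editing or recompiling any source:

    import java.util.List;

    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    public class RuntimeMockDemo {
        public static void main(String[] args) {
            // Mockito generates a fake List implementation at runtime;
            // no stub class is written or compiled ahead of time.
            @SuppressWarnings("unchecked")
            List<String> stubbed = (List<String>) mock(List.class);

            // "Add desired behaviour" to the object at runtime.
            when(stubbed.get(0)).thenReturn("stubbed value");

            System.out.println(stubbed.get(0)); // prints "stubbed value"
        }
    }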
At times you may need to call a method based on input that was received earlier in the program.
It could be used for dynamic calculation of a value based on a key index, where every key is calculated in a different way or the calculation requires fetching data from different sources. Instead of using a switch statement, you can invoke a method dynamically using methodName + indexOfTheKey.
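A minimal Java sketch of that pattern, using reflection (all the names here are invented for the example):

    import java.lang.reflect.Method;

    public class DynamicDispatch {
        // One calculation per key index; which one runs is decided at
        // runtime from the data, not by a switch statement.
        public long calculateKey0(long seed) { return seed * 31; }
        public long calculateKey1(long seed) { return seed + 17; }

        public long calculate(int indexOfTheKey, long seed) throws Exception {
            // Build the method name as methodName + indexOfTheKey and
            // look it up reflectively.
            Method m = getClass().getMethod("calculateKey" + indexOfTheKey, long.class);
            return (Long) m.invoke(this, seed);
        }

        public static void main(String[] args) throws Exception {
            DynamicDispatch d = new DynamicDispatch();
            System.out.println(d.calculate(0, 10)); // 310
            System.out.println(d.calculate(1, 10)); // 27
        }
    }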
It seems that xUnit no longer supports extending trait attributes; they have sealed the TraitAttribute class.
There are also some other issues with AutoFixture's AutoData() plugin, which lets us inject randomly created data through an attribute. There are a few workarounds for this; however, I am trying to evaluate this for a larger product overall. I liked the demos, since they could do small things like SQL, Excel, and custom attributes for categories.
It seems there was more functionality before the changes. I have looked at the site; it says some of the features are returning, but there isn't much information.
Is there a new set of functionality coming out, or perhaps a change that will allow us to recreate the older functionality in a new way? It seems SQL and Excel have workarounds, but I can't find any information about when the latest version will be compatible with the "AutoFixture with xUnit.net data theories" NuGet package.

I really like what I have seen, though I don't like breaking changes when I look at enterprise solutions. I cringe a little when I imagine having this in place in an enterprise, with a lot of custom attributes, or using Moq and AutoFixture to populate data, and now all my tests are broken. So I guess the other question is: does xUnit change a lot, with breaking changes? There is also the option of moving xUnit back a version, though at some point I would need to know whether these things will be fixed or have been permanently removed, since I wouldn't want to spend time on functionality that is being removed.
Another issue is AutoFixtureMoqAutoDataAttribute, which doesn't load without its companion NuGet package, and the companion NuGet packages are not being updated.
I guess the end question is: does anyone know of any plans to get these features working with the current version of xUnit, so that I can start implementing now and expect to do mass replaces later? Or are these permanent breaking changes, meaning we shouldn't implement anything that is currently missing?
Thank you in advance.
Short answer
If you want to use xUnit.net 1.x with AutoFixture, use AutoFixture.Xunit.
If you want to use xUnit.net 2.x with AutoFixture, use AutoFixture.Xunit2.
Explanation
xUnit.net 2.0 introduced breaking changes compared to xUnit.net 1.x (e.g. 1.9.2). We wanted to make sure that AutoFixture supports both. There are people who want to upgrade to xUnit.net 2.x as soon as possible, but there are also people who, for various reasons, will need to stay with xUnit.net 1.x for a while longer.
For the people who want or need to stay with xUnit.net 1.x for the time being, we wanted to make sure that they'd still get all the benefits of the various bug fixes and new features in the AutoFixture core, so we're maintaining two parallel (but feature-complete) glue libraries for AutoFixture and xUnit.net.
As an example, we've just released AutoFixture 3.30.3, which addresses a defect in AutoFixture itself. This bug fix thus becomes available for both xUnit.net 1.x and 2.x users.
Thus, when you need to migrate from xUnit.net 1.x to xUnit.net 2.x, you should uninstall AutoFixture.Xunit and instead install AutoFixture.Xunit2. As far as I know, there should be feature parity between the two.
Traits
AutoFixture.Xunit and AutoFixture.Xunit2 don't use the [Trait] attribute, so I don't know exactly what you have in mind regarding this.
AutoMoq
Again, AutoFixture.AutoMoq doesn't depend on xUnit.net, so I don't understand the question here either. It sounds like a separate concern, so you may want to consider asking a separate question.
Is there a way to combine the libuv event loop with another event loop (like Qt's) without using multiple threads?
I found this: https://stackoverflow.com/a/17329626/4014896
But I don't get how it works. Shouldn't the example cause 100% CPU usage?
And how can I embed it into, for example, Qt?
There is also this: https://github.com/svalaskevicius/qt-event-dispatcher-libuv
But there is no documentation at all, and from what I can see it seems to be something that translates, for example, QSocket to uv_tcp_socket, which is not what I'm searching for.
In short - you'll either need to merge the two event loops, or use separate threads and sync the event handlers manually.
The first link you pasted shows how to process the libuv events that have happened since the last invocation. The while loop shown there will use ~100% CPU if no events are being dispatched (as it will just keep polling).
The second link (qt-event-dispatcher-libuv) is a project I've created to tackle the same problem. It does, however, work as you described - by using libuv to handle Qt's event loop (and, by doing that, it merges the two event loops into one).
To use it, you'd just need to register the event dispatcher in your application using QCoreApplication::setEventDispatcher (http://qt-project.org/doc/qt-5/qcoreapplication.html#setEventDispatcher). An example where this library is used: https://github.com/svalaskevicius/qtjs-generator/blob/master/src/runner/main.cpp#L179
There is still one catch with this approach: while it works very well on Linux, there are some problems on OS X, as Qt's Cocoa platform support plugin handles some of Cocoa's event loop operations and doesn't provide a nice API to merge those as well (currently it updates them once it's freed up after a small timeout, so there is some (barely?) noticeable lag in handling the GUI events). I was planning to port the platform support plugin to be able to integrate it as well, but that's still in the future. And I haven't tested it on Windows yet :)
An alternative solution could be to merge the two loops from the other direction than I've done - instead of making Qt use libuv, a libuv API could be provided that uses Qt's handlers - although that would require a considerable amount of work as well.
Please let me know if there is any more info I could provide.
Regards,
I'm trying to use the SketchUp API to navigate around 3D models (zoom, pan, rotate, etc.). My ultimate aim is to integrate it with a Leap Motion app.
However, right now I think my first step is to figure out how to control the basic navigation gestures via the SketchUp API. After a bit of research, I see that there are the 'Camera' and 'Animation' interfaces, but I think they are more suited to hardcoded paths and motions within a script.
Therefore I was wondering - does anyone know how I can write a plugin that accepts input from another program (my eventual Leap Motion app, in this case) and translates it into specific navigation commands using the SketchUp API (like pan, zoom, etc.)? Can this be done using the 'Camera' and 'Animation' interfaces (in some sort of step increments), or are there other interfaces I should be looking at?
As usual, any examples would be most helpful.
Thanks!
View, Camera, and the Animation class are what you are looking for. Maybe you don't even need the Animation class; you might be fine just using the timer in the UI class. It depends on the details of what you will be doing.
You can set the camera directly like so:
# Arguments are the eye position, the target point, and the up vector.
Sketchup.active_model.active_view.camera.set(ORIGIN, Z_AXIS, Y_AXIS)
or you can use View.camera=, which also accepts a transition time argument if you find that useful.
For bridging the input you could always create a Ruby C Extension that takes care of the communication between the applications. There are some quirks in getting C Extensions to work with SketchUp's Ruby as opposed to regular Ruby though - depending on how you compile it.
I wrote a hello world example a couple of years ago: https://bitbucket.org/thomthom/sketchup-ruby-c-extension
Though note that I have since found a better solution for Windows, using the Development Kit from RubyInstaller: http://rubyinstaller.org/
This answer is related to my comment above regarding the view seemingly 'jumping' when I assign a new camera to the current view using camera=, but not if I use camera.set.
I figured out this was happening because the original camera's FOV was different, and the new camera was defaulting to an FOV of 30. Explicitly creating the camera with the optional perspective and FOV arguments taken from the initial camera solves the problem:
# Carry over the original camera's perspective flag and field of view
# so the view doesn't "jump" when the new camera is assigned.
new_camera = Sketchup::Camera.new(new_eye, new_target, curr_camera.up, curr_camera.perspective?, curr_camera.fov)
Hope people find this useful!
I've been working on a Flex component and I'd like to write some automated tests for it. The trouble is, the UI testing tools I've looked at (FlexMonkey and Selenium Flex API) don't simulate "enough":
Most of the bugs which have come up so far relate to the way Flex deals with dragging and dropping, which these libraries can't simulate accurately enough. For example, I need to test a case where a "drop" event occurs in the bottom half of a component - neither FlexMonkey nor the Selenium Flex API can do that (they may simulate a mouse event, but they won't include coordinates).
So, is there any "good" way to automate that sort of testing?
Edit: After much research, it looks like the only piece of software that can do this is iMacros, which is Windows-only and whose interface is... lacking. So I'm going to write my own. Basically, it will put an HTTP interface on java.awt.Robot so that code (in any language) can simulate mouse/keyboard events. If you're interested, PM me and I'll keep you updated.
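To make the idea concrete, here is a minimal sketch of driving a drag-and-drop with java.awt.Robot (the class name and coordinates are invented for the example; the HTTP layer is omitted):

    import java.awt.AWTException;
    import java.awt.Robot;
    import java.awt.event.InputEvent;

    public class DragDropSketch {
        public static void main(String[] args) throws AWTException {
            Robot robot = new Robot();
            robot.setAutoDelay(20); // milliseconds between generated events

            // Press the left button at the drag source...
            robot.mouseMove(100, 200);
            robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);

            // ...move in small steps so the app sees a realistic drag...
            for (int x = 100; x <= 300; x += 10) {
                robot.mouseMove(x, 200 + (x - 100) / 2);
            }

            // ...and release at exact coordinates, e.g. the bottom half
            // of the drop target.
            robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
        }
    }

Because the events come from the OS rather than from inside the Flash player, the application sees them exactly as it would see a real user's mouse.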
Edit 2: I have published the first version of the framework I wrote, Blunderbuss, over at BitBucket: http://bitbucket.org/wolever/blunderbuss/ . You'll need Jython to run it (http://www.jython.org/), but after that the flex-client example should work.
Videos of Blunderbuss live over at Vimeo:
Automating Flex testing with Blunderbuss
Blunderbuss test suite running
At the moment this remains a proof of concept, as I haven't had the cycles to clean it up and make it more usable… But maybe enough people bothering me would give me that time :)
I've used Eggplant to test Flash and AIR apps without having to add any hooks into the code. It's a great tool, but it's quite expensive. It simulates a real user by VNC-ing into a system and using image recognition - among other things - to interact with the app.
I am definitely interested in your custom Java class, and (though I am not the best at Java yet...) I would be willing to help out if you're thinking of making this collaborative.
As to Flash MouseEvents: unfortunately, there really isn't an accurate way to simulate the drag/drop experience in Flash. MouseEvents, when generated by the mouse, are handled very differently than regular events, and while you could simulate actions by passing events into the handling functions, or by making the dispatcher fire a new DragEvent( DragEvent.DRAG_DROP..., it will not be the same as having the user interact with it. And for some functionality (like gaining access to the clipboard), nothing inside Flash will accomplish your goals.
To be honest, you're probably headed in the right direction -- using something which is not written in Flash to drive faked mouse events is probably your best bet.
I've never had to use it in Flex, but I recently stumbled across some info on the automation packages in the MS Surface SDK. Looking into it, those classes automate user behavior for testing, i.e. move a fake mouse to this point, perform this action. Since you're using Flex, look at the mx.automation packages and classes. My guess (and hope) is that you'd be able to achieve what you want using these classes.
You could also try AutoHotkey - it is similarly a macro-editing program, but it has proven to be very efficient, and you can write scripts and set it up very easily.