What would be the fastest method: script setup or setup function - Vue.js 3

In Vue 3, which of these two approaches is faster to run:
<script setup></script> <!-- 1 -->
<script>export default { setup() { return {} } }</script> <!-- 2 -->

Both produce essentially the same output once they reach the compiler, since the first one is syntactic sugar for the second.
There are no real performance gains here, just a matter of DX preference.
And even if there were a 0.03 ms gain, I would start by optimizing your images/fonts/third-party analytics injected by GTM first.


Implementing a cold source with Mutiny

I'd like to know the proper way to implement my own cold source (publisher) using the Mutiny library.
Let's say there is a huge file parser that should return lines as Multi<String> items according to the Subscriber's consumption rate.
New lines should be read only after the previous ones were processed, to optimize memory usage, while buffering a couple of hundred items to avoid consumer idling.
I know about the Multi.createFrom().emitter() factory method, but using it I can't see a convenient way to implement backpressure.
Does Mutiny have an idiomatic way to create cold sources that produce the next items only when requested by the downstream, or am I supposed to implement my own Publisher using the Java Reactive Streams API and then wrap it in a Multi?
You can use Multi.createFrom().generator(...).
The supplied function is called for every request, and you can pass a "state" object to remember where you are, typically an Iterator.
This is the opposite of the emitter approach (which does not check for requests but has a backpressure strategy attached to it).
If you need more fine-grained backpressure support, you will need to implement a Publisher.
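For illustration only (in Python rather than Java, and without the Mutiny API): the pull-based behaviour behind generator(...) is essentially what a language-level generator gives you - nothing is read until the downstream asks for the next item. The file contents here are made up for the demo.

```python
import os
import tempfile

def cold_lines(path):
    """Cold source: the file is opened lazily and read
    one line per downstream request (i.e. per next() call)."""
    with open(path) as f:
        for line in f:
            yield line.rstrip("\n")

# Demo with a throwaway file (hypothetical data):
tmp = tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False)
tmp.write("line1\nline2\nline3\n")
tmp.close()

source = cold_lines(tmp.name)   # nothing is read yet (cold)
first = next(source)            # one "request" pulls exactly one line
print(first)
os.unlink(tmp.name)
```

The generator never runs ahead of the consumer, which is the memory-usage property asked about; the "buffer a couple of hundred items" part would be an extra layer on top (in Mutiny, an operator on the Multi).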

Speed up 'if' conditions (verify / waiton)

(Disclaimer: I'm quite new to Tosca but not to testing in general)
DESCRIPTION
I'm automating a very dynamic page in Tosca, with content that is added (or not) as you progress through a form.
I do not have a test sheet at hand and no time to create one, so I cannot use a template and the 'conditions' (I'm using TC-Parameters and they do not seem to apply to the 'Condition' column).
I wanted to use libraries as much as possible because most of the steps are the same, and there are a LOT of possible outcomes (I have more than 100 TCs to automate). So I'm trying to keep my steps as 'generic' as possible, so that if the interface changes in the future I'll be able to maintain most of it 'centrally'.
PROBLEM
I've added four 'ifs' in strategic points. The problem is that an unsuccessful 'if' seems to hang for 10s, no matter what I use inside: 'verify' takes 10s & 'waiton' also takes 10s (although for the latter I modified the settings to 5s so I don't understand why).
I'm actually not interested in 'verify' waiting at all. I know that the content has either to be there or not at the precise moment where I have my condition. I'd be happy with a 1s delay, that'd be more than enough time for the app to display the content.
The TCs duration varies between 1m and 1m40s (4*10s if my 4 'if' are negative). It'd be great if I could speed it up, especially because most of the 'ifs' will NOT trigger. Any ideas?
You could try checking some settings regarding how long Tosca waits for Verify/Waiton:
Settings > Engine > Waiton > Maximum Wait Duration
Settings > TBox > Synchronization > Synchronization Time out
However, I've also found using buffered values to be more efficient time-wise for some of my testing scenarios.
I solved it by adding TCPs and buffering them, in order to check them as conditions (not sure why Tosca requires the intermediate buffering step, but that does the trick). Now I can configure whether my tests should expect a given follow-up item or not. It's a lot of extra configuration, but at least my tests are nicely modular.

Awesome WM - os.execute vs awful.spawn

I was wondering if there are any difference between lua's
os.execute('<command>')
and awesome's
awful.spawn('<command>')
I noticed that lain suggests using os.execute; is there any reason for that, or is it a matter of personal taste?
Do not ever use os.execute or io.popen. They are blocking functions, cause very bad performance, and noticeably increase input (mouse/keyboard) and client repaint latency. Using these functions is an anti-pattern and will only make everything worse.
See the warning in the documentation https://awesomewm.org/apidoc/libraries/awful.spawn.html
Now, to really answer the question, you have to understand that Awesome is single-threaded: it only does one thing at a time. If you call os.execute("sleep 10"), your mouse will keep moving but your computer will otherwise be frozen for 10 seconds. For commands that execute fast enough you may not notice, but keep in mind that, due to latency/frequency constraints, any command that takes 33 milliseconds to execute will drop up to 2 frames in a 60 fps video game (and will drop at least one). If you have many commands per second, they add up and wreck your system performance.
But Awesome isn't doomed to be slow. It may not have multiple threads, but it has coroutines and callbacks. This means things that take time to execute can still do so in the background (using C threads or an external process + sockets). When they are done, they can notify the Awesome thread. This works perfectly and avoids blocking Awesome.
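The "run in the background, then notify the main thread" idea is not specific to Lua or Awesome. As a rough analogy in Python (using a plain thread in place of Awesome's event loop - this is a sketch of the pattern, not awful.spawn itself):

```python
import subprocess
import threading

def spawn_async(cmd, callback):
    """Run a shell command in a background thread and invoke
    callback(output) when it finishes; the caller is never blocked."""
    def worker():
        out = subprocess.run(cmd, shell=True,
                             capture_output=True, text=True).stdout
        callback(out)
    t = threading.Thread(target=worker)
    t.start()
    return t

results = []
t = spawn_async("echo foo", results.append)
# The main thread is free to keep handling input/repaints here...
t.join()  # joined only for this demo; an event loop would not block like this
print(results[0].strip())
```

In Awesome the notification is delivered through the GLib main loop rather than a thread join, but the shape is the same: the slow work happens elsewhere, and your code only runs once the result exists.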
Now this brings us to the next part. If I do:
-- Do **NOT** do this.
os.execute("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all")
The label will display foo. But if I do:
-- Assumes /tmp/foo.txt does not exist
awful.spawn.with_shell("sleep 1; echo foo > /tmp/foo.txt")
mylabel.text = io.popen("cat /tmp/foo.txt"):read("*all")
Then the label will be empty. awful.spawn and awful.spawn.with_shell do not block, and thus the io.popen will be executed way before sleep 1 finishes. This is why we have async functions to execute shell commands. There are many variants with different characteristics and complexity. awful.spawn.easy_async is the most common, as it is good enough for the general "I want to execute a command and do something with the output when it finishes" case.
awful.spawn.easy_async_with_shell("sleep 1; echo foo > /tmp/foo.txt", function()
    awful.spawn.easy_async_with_shell("cat /tmp/foo.txt", function(out)
        mylabel.text = out
    end)
end)
In this variant, Awesome will not block. Again, like the other spawn variants, you cannot put code that uses the result of the command outside of the callback function: that code would be executed before the command is finished, so the result wouldn't yet be available.
There are also coroutines, which remove the need for callbacks, but they are currently hard to use within Awesome and would be confusing to explain here.

Do I always have to "wait" for page loads when using selenium on non-ajax pages?

I'm writing some BDD tests using Cucumber, Selenium and Xunit for a legacy ASP.Net application. The way the pages are designed, every "click" leads to a new page being fetched from the server. If I have to automate the tests for a particular page, should I have a line similar to the following after every "click"?
WebDriverWait wait = new WebDriverWait(driver, TimeSpan.FromSeconds(timeout));
wait.Until(...); //Wait until something about the page is true
I'm not sure if Selenium would wait implicitly for page loads without my explicitly having to state this all the time. What is the recommended pattern to handle this scenario?
It's cumbersome to always have to come up with "some element" to put in the Until method, and that leads to brittle tests. The ASP.NET pages are littered with dynamic controls and a whole slew of page refreshes, which makes the test code quite unreadable.
My proposed solution: write an extension method that does the waiting implicitly and takes an element id to wait on. But that just refactors the problem into a more manageable place; I still have to perform an explicit wait. Is there no way to eliminate it? Does Selenium have some sensible default that would handle this case without the need for such an extension method, or is this really the natural way of doing it?
If you want your tests to be reliable and to wait only exactly as long as needed, then yes, Explicit Waits with WebDriverWait are a perfect solution. It's actually a very "natural" solution: think about how you, as a user, decide that a page has loaded - usually when you see the desired content, correct? While the page is loading, you are constantly re-evaluating its state, checking whether the desired content has appeared. Explicit Waits follow the same logic: by default, every 500 ms they check whether the expected condition is true, but for no more than the X seconds you configured when instantiating the WebDriverWait.
If you need wait.until() calls often and want to follow the DRY principle, consider applying the "Extract Method" or other refactorings.
You could instead set an implicit wait, which is applied on every element search, or introduce hardcoded "artificial" delays, but that's neither reliable nor time-efficient - you'll end up waiting longer than needed and still see occasional test failures.
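The polling behaviour described above can be sketched without Selenium at all. This hypothetical wait_until helper mirrors what WebDriverWait does (re-evaluate a condition at a fixed interval until it holds or a timeout expires); the condition and timings below are made up for the demo:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Re-evaluate condition() every poll_interval seconds;
    return its truthy value as soon as it holds, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        value = condition()
        if value:
            return value
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Demo: the "page" becomes ready after a short delay.
ready_at = time.monotonic() + 0.3
result = wait_until(lambda: time.monotonic() >= ready_at,
                    timeout=5.0, poll_interval=0.05)
```

In real test code the lambda would be something like "the post-click page's marker element is present", which is exactly the shape WebDriverWait's Until(...) expects.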

How to write integration test for systems that interact asynchronously

Assume I have a function called PlaceOrder which, when called, inserts the order details into a local DB and puts a message (the order details) into a TIBCO EMS queue.
Once the message is received, a TIBCO BW process invokes some other system (say, ExternalSystem) to pass on the order details.
The way I wrote my integration test is:
Call PlaceOrder.
Sleep, then check the details exist in the local DB.
Sleep, then check the details exist in ExternalSystem.
Is the above approach correct? The test gives me confidence that the end-to-end integration is working, but is there a better way to test this scenario?
The problem you describe is quite common, and your approach is a very typical solution.
The problem with this solution is that if the delay is too short, your tests may sometimes pass and sometimes fail; but if the delay is very long, you're just wasting time waiting, and with many tests that adds up to a lot of delay. Unless you can get some signal telling you the order has arrived in the database, you just have to wait.
You can reduce the delay by doing lots of checks at short intervals. If your order is not there after the timeout, you fail the test.
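A minimal sketch of "lots of checks at short intervals", in Python for illustration - the two probe functions below are stand-ins (they simulate the asynchronous inserts with timers); in a real test you would replace them with your actual DB and ExternalSystem lookups:

```python
import time

def poll(probe, timeout=5.0, interval=0.1):
    """Call probe() repeatedly until it returns True or the timeout passes.
    Returns True on success, False on timeout (the test then fails)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

# Hypothetical stand-ins for real lookups:
_placed_at = time.monotonic()
def order_in_local_db():
    return time.monotonic() > _placed_at + 0.2   # simulated async insert

def order_in_external_system():
    return time.monotonic() > _placed_at + 0.4   # simulated slower hop

assert poll(order_in_local_db), "order never reached the local DB"
assert poll(order_in_external_system), "order never reached ExternalSystem"
```

Compared to fixed sleeps, the test finishes as soon as the data actually arrives, and the timeout only costs its full duration when something is genuinely broken.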
In "Growing Object-Oriented Software, Guided by Tests"*, there is a chapter on this very subject, so you might want to get a copy if you will be doing a lot of this sort of testing.
"There are two ways a test can observe the system: by sampling its observable
state or by listening for events that it sends out. Of these, sampling is
often the only option because many systems don’t send any monitoring
events. It’s quite common for a test to include both techniques to interact
with different “ends” of its system"
(*) http://my.safaribooksonline.com/book/software-engineering-and-development/software-testing/9780321574442
