In a Karate test scenario (feature file), I need to interact with a canvas element to simulate a signature being written in it:
<canvas width="1292" height="596" style="width: 100%; height: 100%; touch-action: none;"></canvas>
I have tried using JavaScript to draw something on the canvas, but it was not recognized as a signature. I used this:
* script('document.getElementsByTagName("canvas")[0].getContext("2d").fillStyle = "green"')
* script('document.getElementsByTagName("canvas")[0].getContext("2d").fillRect(10, 10, 150, 100)')
It worked, but the painted rectangle was not recognized as a signature. I had a different idea: to use beginPath/moveTo/fill in JavaScript:
* script('document.getElementsByTagName("canvas")[0].getContext("2d").fillStyle = "green"')
* script('document.getElementsByTagName("canvas")[0].getContext("2d").beginPath()')
* script('document.getElementsByTagName("canvas")[0].getContext("2d").moveTo(200, 200)')
* script('document.getElementsByTagName("canvas")[0].getContext("2d").fill()')
Using this, the test passed (these lines executed without errors), but in reality no line was drawn on the canvas...
Do you have any idea how to simulate this, or is there another way to do it? My aim is to simulate a hand-written signature in the canvas.
Thank you!
My sincere opinion is to drop this test and focus on the other parts of the application that you can test; it may not be worth it. Also consider asking the developer team to disable this part of the UI flow in test mode. This is a perfectly legitimate test strategy; many teams do this for CAPTCHAs, for example.
It may be possible if you use the mouse() API, which supports click and drag to some extent: https://github.com/intuit/karate/tree/master/karate-core#mouse
But this is not an area where you will get much help on the internet.
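For example, something along these lines might work as a starting point. This is only a sketch based on the mouse() docs linked above; the locator, coordinates and exact chaining are assumptions, so check them against your Karate version:
# rough, untested sketch: press, drag a few short strokes across the canvas, then release
* def pos = position('canvas')
* mouse().move(pos.x + 100, pos.y + 100).down().move(pos.x + 200, pos.y + 150).move(pos.x + 300, pos.y + 120).up()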
What I want is that in A-Frame, when I talk, my 3D model avatar is made to talk as well.
Following this guide, https://aframe.io/docs/1.1.0/introduction/models.html#animating-models, I created a 3D model avatar from this resource: https://sketchfab.com/3d-models/bake-talking3-e715ab67be934a108d0a952d90c07210
But this glTF 3D model is talking all the time. I need an interactive 3D model that talks only when I talk.
Let's assume my voice detection is already implemented.
Can anyone answer this, please?
The animation-mixer component has two methods that should help:
playAction(), which will play the animation
stopAction(), which will stop the animation
Assuming your voice detection is already implemented, your code could look like this:
const modelEntity = document.querySelector("[gltf-model]");
const animationComponent = modelEntity.components["animation-mixer"];

// play the talking animation while speech is detected
mySpeechRecognition.onspeechstart = function() {
  animationComponent.playAction();
};

// stop it again when speech ends
mySpeechRecognition.onspeechend = function() {
  animationComponent.stopAction();
};
Something like what I did in this glitch. Green plays, red stops. Click on the fish to check out the source.
I can find no explanation of what the coordinate values passed to JxBrowser's forwarded mouse events are actually supposed to do.
There are examples on:
https://jxbrowser.support.teamdev.com/support/solutions/articles/9000102480-forwarding-mouse-events
but what are 629 and 373? I can't figure out what those values are for; I get the same behaviour with any value for them.
What if one also sets windowX and windowY?
What would they do to the resulting click?
I am looking to be able to click and move on a Google map. Is that possible?
The x and y values define the mouse event coordinate inside the browser content area.
The globalX and globalY values define the screen coordinate of the mouse event.
The windowX and windowY values are deprecated; setting them does not affect anything.
For more detailed information about working with Google Maps, please take a look at the article.
Your browserView must be active/focused first!
Then, let's do it this way:
public static void simulateMouseClickOnElement(Browser browser, BrowserView browserView, DOMElement element) {
    // x and y: the element's coordinates inside the browser content area
    Rectangle rect = element.getBoundingClientRect();
    // globalX and globalY: the same point converted to screen coordinates
    Point ptOnScreen = new Point(rect.x, rect.y);
    SwingUtilities.convertPointToScreen(ptOnScreen, browserView);
    forwardMouseClickEvent(browser, MouseButtonType.PRIMARY, rect.x, rect.y, ptOnScreen.x, ptOnScreen.y);
}
JxBrowser is a great tool for automation and crawling; Selenium is the best alternative choice due to the TCO (total cost of ownership).
I'm trying to start an application (Spotify) on a particular tag. The rules aren't applied, so now I'm inspecting the client class by printing it in a notification from the "manage" signal. This results in an empty notification.
client.connect_signal("manage", function (c, startup)
naughty.notify({title=c.class})
end)
When I restart awesome, it does print the client class, so why isn't it working when the client is initially started?
Using xprop, it also prints the class:
WM_CLASS(STRING) = "spotify", "Spotify"
Sounds like a bug in Spotify (and I think I heard about this one before). I would guess that Spotify does not follow ICCCM and only sets its WM_CLASS property after it made its window visible and not before.
I fear that you cannot do much about this except for complaining to Spotify devs to fix their stuff.
You could work around this by starting a timer in the manage signal that checks whether the window turns out to be Spotify a short time later. Alternatively, you could react to changes of a window's class with something like:
client.connect_signal("property::class", function(c)
    if c.class == "Spotify" then
        print("This is now a spotify window")
    end
end)
(Of course you'd want to do something more useful with Spotify's windows than printing them.)
(Per the ICCCM, a window is not allowed to change its class while it is visible, but who cares about standards...)
I had a similar issue with the claws-mail client. Inspecting it via xprop, it shows
WM_CLASS(STRING) = "claws-mail", "Claws-mail"
but awesome just didn't apply the rules for it. The trick was giving awesome-wm both of these class names in the rules section by providing a set of characters to choose from:
rule = {class = "[Cc]laws%-mail"}
I hope this works for your Spotify application, too (something like rule = {class = "[Ss]potify"}).
For further reading about patterns in Lua I suggest this:
https://www.lua.org/pil/20.2.html
THE GOAL I WANT TO ACHIEVE:
Control AE timelines using ONE EXPRESSION LAYER (much like using ActionScript) to trigger frequently used comps such as blinking, walking, flying, etc., for cartoon animation.
I want to animate the blinking of a cartoon character (and other actions, explained below). Rather than re-pasting the comp or the keyframed movements every time I want a blink or a particular action, I want to create a script where I can trigger the Blink comp to play. Is this possible? (Side note: a random blink through the entire movie would be nice, but I still want to know how to do this for the reasons below.)
Ideally, I would like to create an "Expressions layer" in the main comp to TRIGGER other comps to play. At certain points I would like to add triggers that call frequently used comps containing actions like blinking, walking, flying, looking left and right, etc.
IT WOULD BE AMAZING IF somehow we could trigger other comps to begin, repeat, stop, maybe reverse, and do this all from one Main Comp using an expression layer.
WHY DO IT THIS WAY?
Why not just paste a comp in the spot you want it to play every time you want such an action? Well, in After Effects, if you wanted a "blink comp" to play 40 times in two minutes, you would have to create 40 layers, or paste the keyframes of that comp 40 times. Wouldn't it be awesome to trigger or call it from one expressions layer whenever you wanted it?
We do something like this in Flash using ActionScript all the time. It would be awesome if there were a method out there to achieve this effect. This would be an OUTSTANDING tutorial, and I believe it would be very popular if someone did it. It could be used for a MULTITUDE of amazing effects and could save a ton of time for everyone. Heck, help me figure this out and perhaps I will make a tutorial.
Thank you all ye "overflowing Stackers" who contribute! :)
I found the answer and that is...
IT'S NOT POSSIBLE.
After Effects expressions cannot control other timelines. Unfortunately you have to put an expression on each layer you want to affect.
The next best solution, which achieves something close to what I was asking, can be found at this link: motionscript.com/design-guide/marker-sync.html
We can only hope that Adobe will someday give expressions this kind of power, like they did with ActionScript.
HOPEFULLY SOON! Anyone reading this who works for Adobe please plead our case. Thanks
Part 1: Reference other layers in pre-Comps
Simply replace thisComp with comp("ComName").
To reference effect controllers between compositions, follow this formula:
comp("ComName").layer("LayerWithExpression").effect("EffectControllerName")("EffectControllerType")
More In-depth Answer: Adobe's Docs - Skip to the Layer Sub-objects part
As I understand the Adobe documentation, only layers can be accessed, not footage. What this means is that you will need to create your expression link using a pre-comp. Footage cannot be accessed this way, so that also means no nulls, adjustment layers, etc.
As an added bonus, if you use the Essential Graphics panel, you can put all the controllers in one pre-comp but have the controls available no matter which comp you are in. Just select it in the Essential Graphics dropdown.
Part 2: Start/End based on other layers within pre-comps:
Regarding the next part, where you want the expressions to activate based on other compositions, I recommend using the inPoint/outPoint expressions (see the sketch after these attribute descriptions).
inPoint | Return type: Number. Returns the In point of the layer, in seconds.
outPoint | Return type: Number. Returns the Out point of the layer, in seconds.
If you utilize the start time overrides you can pull this from:
startTime | Return type: Number. Returns the start time of the layer, in seconds.
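As a minimal sketch of that idea (the comp and layer names are placeholders), you could drive a property, e.g. Opacity, from another comp's layer so it is only "on" between that layer's in and out points:
trigger = comp("Main").layer("BlinkTrigger");
// 100 while the trigger layer is active in its comp, 0 otherwise
(time >= trigger.inPoint && time <= trigger.outPoint) ? 100 : 0;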
Alternate Option:
I would recommend avoiding this, as keyframes are basically referenced by index, so things can get messed up if you add one ahead of a keyframe you were already using; definitely incorporate some error handling.
Refer to the Key attributes and methods (expression reference) Here
Part 3: Interpolation & Time Reversal
You can time-reverse the layer via right-click -> Time; otherwise this is all about interpolation expressions like loopOut(), etc. You can loopOut() a pre-comp if you not only cut it correctly, but also enable time remapping.
Then use this to loop those keyframes:
try {
    timeStart = thisProperty.key(1).time;
    duration = thisProperty.key(thisProperty.numKeys).time - timeStart;
    pingPong = false; // change to true if you want to loop the animation back & forth
    quant = Math.floor((time - timeStart) / duration);
    if (quant < 0) quant = 0;
    if (quant % 2 == 1 && pingPong == true) {
        t = 2 * timeStart + (quant + 1) * duration - time;
    } else {
        t = time - quant * duration;
    }
} catch (err) {
    t = time;
}
thisProperty.valueAtTime(t)
I want to display LaTeX math in the Gmail messages that I receive, so that for example $\mathbb P^2$ would show as a nice formula. Now, there are several JavaScript libraries available (for example, this one, or MathJax) which would do the job; I just need to call them at the right time to manipulate the Gmail message.
I know that this is possible to do in the "basic HTML" and "print" views. Is it possible to do in the standard Gmail view? I tried to insert a call to the JavaScript right before the "canvas_frame" iframe, but that did not work.
My suspicion is that manipulating a Gmail message by any Javascript would be a major security flaw (think of all the malicious links one could insert) and that Google does everything to prevent this. And so the answer to my question is probably 'no'. Am I right in this?
Of course, it would be very easy for Google to implement viewing of LaTeX and MathML math simply by using MathJax on their servers. I made the corresponding Gmail Lab request, but no answer, and no interest from Google apparently.
So, again: is this possible to do without Google's cooperation, on the client side?
I think one of the better ways to do this might be to embed images using the Google Charts API.
<img src="http://chart.apis.google.com/chart?cht=tx&chl=x=\frac{-b%20\pm%20\sqrt{b^2-4ac}}{2a}">
To learn more: https://developers.google.com/chart/image/ [note: the API has been officially deprecated, but will work until April 2015]
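If you go this route, a small helper (just a sketch, using the same chart endpoint as the img tag above) could build such an image URL from a TeX snippet:
// build a Google Chart API image URL for a TeX snippet
function texToImgSrc(tex) {
    return 'http://chart.apis.google.com/chart?cht=tx&chl=' + encodeURIComponent(tex);
}
// usage: someImg.src = texToImgSrc('\\mathbb P^2');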
If you really must use LaTeX and some js library, I think one way you could accomplish this is by injecting a script tag into the iframe.
I hope this is a good starting point.
Example:
// ==UserScript==
// @name Test Gmail Alterations
// @version 1
// @author Justen
// @description Test Alter Email
// @include https://mail.google.com/mail/*
// @include http://mail.google.com/mail/*
// @license GPL version 3 or any later version; http://www.gnu.org/copyleft/gpl.html
// ==/UserScript==
(function GmailIframeInject() {
GM_log('Starting GMail iFrame Injection');
var GmailCode = function() {
// Your code here;
// The ':pd' (div id) changes, so you might have to do some extra work
var mail = document.getElementById(':pd');
mail.innerHTML = '<h1>Hello, World!</h1>';
};
var iframe = document.getElementById('canvas_frame');
var doc = null;
if( iframe ) {
GM_log('Got iFrame');
doc = iframe.contentDocument;
} else {
GM_log('ERROR: Could not get iframe with id canvas_frame');
return
}
if( doc ) {
GM_log('Injecting GmailCode');
var code = "(" + GmailCode + ")();"
doc.body.appendChild(doc.createElement('script')).innerHTML=code;
} else {
GM_log('ERROR: Could not get iframe content document');
return;
}
})();
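To actually render LaTeX instead of just replacing the message, the injected GmailCode could load MathJax into the message frame. A rough sketch (the CDN URL is an assumption, and MathJax's inline $...$ delimiters may need to be enabled in its configuration):
var GmailCode = function() {
    // load MathJax into the message document and let it typeset the math it finds
    var mj = document.createElement('script');
    mj.src = 'https://cdn.jsdelivr.net/npm/mathjax@2/MathJax.js?config=TeX-AMS_HTML';
    document.getElementsByTagName('head')[0].appendChild(mj);
};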
Well, there are already Greasemonkey scripts that do things to Gmail as far as I know (like this one). Is this a possible security hole? Of course, anything you'd do with executable code has that risk. Google seems to move at glacial speed on things they're not interested in. They really do seem to function based on internal championing of ideas, so the best way forward is to find sympathetic Googlers if you want them to include something in Gmail. Otherwise stick to Greasemonkey; at least you'll have an easy install path for other people who'd like to see the same functionality.