I'm making a robot using an Arduino board and an Ethernet shield. The Ethernet shield is set up so that if I type http://www.robotip.com/?1 it gives power to pin 1, and if I type http://www.robotip.com/?2 it gives power to pin 2. I have a button, and rather than sending one command per click, I would like to be able to hold the button and have it send ?1 every second that the button is held. What can I use to achieve this? I know this is a very confusing question lol, thanks a lot.
If you're making a "control interface" with a webpage,
you should use JavaScript (jQuery is a nice JavaScript library).
Using jQuery's get function you can send the commands asynchronously, so the page won't refresh every time you want to send some data to the robot.
jQuery also has a mousedown method, which is just what you want: it lets you run code while the mouse button is pressed.
You can then start your timed requests inside the mousedown handler and stop them again on mouseup.
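For example, here is a minimal sketch of that hold-to-repeat pattern in plain JavaScript (the button id "pin1" is a placeholder, and the robot URL is taken from the question; jQuery's $.get would work the same way as fetch here):

```javascript
// Sketch: send "?1" once per second while the button is held down.
const ROBOT_URL = "http://www.robotip.com/"; // robot address from the question

// Call `send` immediately, then every `periodMs` milliseconds;
// returns a function that stops the timer (call it on mouseup).
function startRepeating(send, periodMs) {
  send(); // fire once right away so a quick click still sends one command
  const timer = setInterval(send, periodMs);
  return function stop() {
    clearInterval(timer);
  };
}

// Browser wiring (skipped when this file is run outside a page):
if (typeof document !== "undefined") {
  const button = document.getElementById("pin1"); // hypothetical button id
  let stop = null;

  button.addEventListener("mousedown", () => {
    // Asynchronous GET, so the page never reloads.
    stop = startRepeating(() => fetch(ROBOT_URL + "?1"), 1000);
  });

  const halt = () => {
    if (stop) {
      stop();
      stop = null;
    }
  };
  button.addEventListener("mouseup", halt);
  button.addEventListener("mouseleave", halt); // stop if the pointer drags off the button
}
```

The mouseleave handler matters in practice: without it, dragging the pointer off the button while holding it would leave the timer running forever.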
I have tried for several days to get my RPi button press accessible through Dataplicity on my phone. I know it is a custom action with on/off, but I am unable to get the interface with on/off to control my button press. The example is for a red and green light on/off. I essentially have a cluster of lights hooked up through a relay to pin 17. Pin 18 controls the button and pin 17 works the relay. One press of the button turns the Christmas lights on for the allotted amount of time I specified. Any suggestions? The link to the example code is:
https://docs.dataplicity.com/docs/custom-actions-gpio
Just not sure how to merge the two. This is my first major project and I have no background in programming.
The button-press code is as follows:
from gpiozero import Button
from gpiozero import OutputDevice
from time import sleep

relay1 = OutputDevice(17)
button = Button(18)

x = 0
relay1.off()
while x < 1:
    print("Lights off...")
    button.wait_for_press()
    print("The button was pressed!")
    relay1.on()
    print("Lights on...")
    print("Waiting...")
    sleep(60)
    relay1.off()
I tried leaving the red portion out of the code in Dataplicity, but I am not sure how to get it to include the relay portion.
Right now, under custom actions on my phone it says "Control LEDs" and "Green LED", and then it shows a spinning wheel that never loads.
Any suggestions are appreciated =(
Could it be that at the end of the button press you're not returning an "OK" status for the custom action?
Similar to the example in the document that you linked:
# Custom Action executed successfully
echo "[[[ReturnOK]]]"
Best regards,
Dataplicity 🤗
I have a task that hides a dialog, but I need to click a button belonging to this dialog to perform some function before going on to the next dialog.
When I hide the dialog, I can't click the button. Is there any way to invoke this button without the On_Bn_Clicked() event? I mean that when the dialog is called, the button should also be activated.
Thanks for the help.
When you click the button a few Windows messages are sent. The important ones are WM_LBUTTONDOWN and WM_LBUTTONUP, which tell the button you pressed and released the left mouse button. Some time later a WM_COMMAND message is sent to the parent window to handle the button click, and at that point your ON_COMMAND() MFC handler is called. MFC abstracts all of this away from you for the most part.
You could go and simulate this using the Win32 SendMessage API but if the message pump is blocking your button may not be clicked when you think it will. If you want a quick answer to your question then this is an approach to "get it done". It would look something like this:
SendMessage(button.GetSafeHwnd(), WM_LBUTTONDOWN, MK_LBUTTON, 0);
SendMessage(button.GetSafeHwnd(), WM_LBUTTONUP, MK_LBUTTON, 0);
I think a more sensible approach is to take the code that is in this On_Bn_Clicked() event handler and move it into a reusable function. That way you can call the same code from On_Bn_Clicked() or from anywhere else in your program.
Just call On_Bn_Clicked() directly from your code. There is no harm in doing so. (I suppose you don't want to actually click the hidden button with the mouse...)
I developed https://play.google.com/store/apps/details?id=com.kunert.einsteinstictactoe. When the user hits the Chromecast button the first time he is able to connect to a Chromecast device. If he hits it the second time after being connected he is able to adjust the volume or to disconnect.
As my application currently doesn't support sound I want to hide the volume adjustment.
Is this possible?
It is possible. You need to define your own MediaRouteDialogFactory and, in there, return your own MediaRouteControllerDialogFragment implementation. In your implementation of that fragment, in onCreateControllerDialog, call setVolumeControlEnabled(false). See the package com.google.sample.castcompanionlibrary.cast.dialog.video in CCL, which has all of these for its implementation.
I have been trying to play a sound on my laptop by pressing a homemade button on my Arduino.
Now I found this example code to play a file with Minim.
I want to know where I can trigger the button in the code, to play the sound.
Can somebody help me?
Try looking here: Arduino and Processing
That page explains communication examples and how to communicate between Arduino and Processing.
Then you just need to call "player.play();" once the user presses the button (and remove that call from where the example makes it automatically).
Looking through the documentation, it seems that the new advanced gestures API doesn't determine the direction of a swipe beyond the basic { left, right, up, down }.
I need the start point of the swipe and the direction.
Is there any way to retrieve this other than coding my own advanced gesture library from scratch on top of the basic gestures?
And if this is my only option, could anyone point me to some open source code that does this?
Got it! Documentation is here, under 'Creating Custom Gesture Recognizers' at the bottom.
Basically the six gestures Apple provides all derive from UIGestureRecognizer, and you can make your own gesture recogniser in the same way.
Then, inside your view's init, you hook up your recogniser, and just the act of hooking it up automatically reroutes incoming touch events.
Actually, the default behaviour is to make your recogniser an observer of these events. This means your view gets them as it used to, and in addition, if your recogniser spots a gesture it will trigger the myCustomEventHandler method inside your view (you passed its selector when you hooked up your recogniser).
But sometimes you want to prevent the original touch events from reaching the view, and you can fiddle around in your recogniser to do that. So it's a bit misleading to think of it as an 'observer'.
There is one other scenario, where one gesture needs to eat another. For example, you can't just send back a single tap if your view is also primed to receive double taps. You have to wait for the double-tap recogniser to report failure, and if it succeeds, you need to fail the single tap; obviously you don't want to send both back!