I'm wondering how to go about establishing serial port communication between an Arduino Uno (as my input) and the Godot game engine (as my output). For context, I'm trying to use sensors on the Arduino to move/place objects in Godot, so I need the serial port code, or guidance on how to achieve that connection.
Any help is appreciated, thank you everyone!
You can try the GDSerCommPlugin plugin. But it is a work in progress and I didn't test it, so I don't know if it is usable.
Since this question was asked a long time ago, I think an update is welcome.
Since Godot 3.0, C# is fully supported. I'm using Godot 3.3.2, and my conclusion about dealing with serial communication in Godot is: use C#.
You keep all the multi-platform development advantages of Godot without writing a single low-level line of code. Here is a snippet:
using System.IO.Ports; // for .NET SerialPort
Then in your class if you want to open the port (assuming "SerialPort port" is defined):
port = new SerialPort(portName, (Int32)baudRate, Parity.None, 8, StopBits.One);
Then use the _Process function to read the data:
public override void _Process(float delta)
{
    if (port != null)
    {
        if (port.BytesToRead > 0)
        {
            string serData = port.ReadExisting();
            string[] str = serData.Split('\n'); // Just an example
            parseLines(str); // Your implementation
        }
    }
}
Note 1: the "DataReceived" event from SerialPort doesn't work (I don't know why, but I think it is due to the different context). With Godot this event is not really needed anyway, since you can check and parse the data directly in the _Process function.
Note 2: for efficiency, it's better to use ReadExisting() (and parse the lines yourself) than to call ReadLine() in a loop inside _Process.
Note 3: sending to the serial port is immediate, e.g.:
port.Write("Hello\r");
Related
I want to use QAudioRecorder to record audio from the user and then use the audio output file for speech to text. I could successfully run and record audio from this example: http://doc.qt.io/qt-5/qtmultimedia-multimedia-audiorecorder-example.html.
But my problem is that I need to detect whether the user has stopped speaking while QAudioRecorder is actively recording audio. So QAudioRecorder should stop only when the user is not speaking.
I can stop QAudioRecorder after a fixed number of seconds using QTimer, as below:
void AudioRecorder::toggleRecord()
{
    if (audioRecorder->state() == QMediaRecorder::StoppedState) {
        audioRecorder->setAudioInput(boxValue(ui->audioDeviceBox).toString());

        QAudioEncoderSettings settings;
        settings.setCodec(boxValue(ui->audioCodecBox).toString());
        settings.setSampleRate(boxValue(ui->sampleRateBox).toInt());
        settings.setBitRate(boxValue(ui->bitrateBox).toInt());
        settings.setChannelCount(boxValue(ui->channelsBox).toInt());
        settings.setQuality(QMultimedia::EncodingQuality(ui->qualitySlider->value()));
        settings.setEncodingMode(ui->constantQualityRadioButton->isChecked()
                                     ? QMultimedia::ConstantQualityEncoding
                                     : QMultimedia::ConstantBitRateEncoding);

        QString container = boxValue(ui->containerBox).toString();

        audioRecorder->setEncodingSettings(settings, QVideoEncoderSettings(), container);
        audioRecorder->record();
        this->recordTimeout();
    }
    else {
        this->stopRecording();
    }
}
void AudioRecorder::recordTimeout()
{
    QTimer *mTimer = new QTimer(this);
    mTimer->setSingleShot(true);
    connect(mTimer, SIGNAL(timeout()), SLOT(stopRecording()));
    mTimer->start(6000);
}

void AudioRecorder::stopRecording()
{
    audioRecorder->stop();
}
But instead of this, it should stop recording when the user is not speaking. The QAudioProbe class has the signal audioBufferProbed(QAudioBuffer), which may be helpful for checking the audio level, but I don't know how to use it or what level can be used to detect that the user is not speaking.
I've been trying to do more or less the same thing for a while now. There is an example - https://doc.qt.io/qt-5/qtdatavisualization-audiolevels-example.html - that shows you how to implement an audio level meter, which should be helpful. The example uses QAudioInput. Specifically, it uses QAudioInput::start(QIODevice *device) and passes a custom QIODevice to implement the audio level meter. The problem with this approach using QAudioInput is that once you've got the data, it's not easy to encode it and write it out to a file, whereas with QAudioRecorder it's simple.
Anyway, you're right: QAudioProbe is your best bet if you want to record the easy way with QAudioRecorder. I adapted the Qt audio level meter example to work with QAudioProbe instead of QAudioInput/QIODevice. See https://gist.github.com/sam-at-github/bf66e84105cc3e23e7113cca5e3b1772.
One minor issue: the level meter needs a QAudioFormat, but QAudioRecorder only provides you with a QAudioEncoderSettings (the code should probably be fixed to use the latter; I don't know why both QAudioEncoderSettings and QAudioFormat need to exist). You just have to get a QAudioDeviceInfo for the device you're using and then use QAudioDeviceInfo::preferredFormat().
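To give an idea of what this can look like in practice, here is a minimal sketch (not the gist above) that attaches a QAudioProbe to the recorder and signals when the input has been quiet for a while. The SilenceDetector class name, the peak threshold (500) and the 2-second hold time are my own placeholders that would need tuning, and it only handles 16-bit signed PCM buffers:
// Watch the recorder's buffers through QAudioProbe and emit silenceDetected()
// when the input stays below the threshold long enough.
#include <QObject>
#include <QAudioProbe>
#include <QAudioBuffer>
#include <QAudioRecorder>
#include <QElapsedTimer>

class SilenceDetector : public QObject
{
    Q_OBJECT
public:
    explicit SilenceDetector(QAudioRecorder *recorder, QObject *parent = nullptr)
        : QObject(parent)
    {
        probe.setSource(recorder);
        connect(&probe, &QAudioProbe::audioBufferProbed,
                this, &SilenceDetector::onBufferProbed);
        silenceTimer.start();
    }

signals:
    void silenceDetected();

private slots:
    void onBufferProbed(const QAudioBuffer &buffer)
    {
        // Only handle 16-bit signed PCM; check buffer.format() for other devices.
        if (buffer.format().sampleType() != QAudioFormat::SignedInt
            || buffer.format().sampleSize() != 16)
            return;

        const qint16 *samples = buffer.constData<qint16>();
        int peak = 0;
        for (int i = 0; i < buffer.sampleCount(); ++i)
            peak = qMax(peak, qAbs(int(samples[i])));

        if (peak > 500)
            silenceTimer.restart();            // user is still speaking
        else if (silenceTimer.elapsed() > 2000)
            emit silenceDetected();            // quiet long enough -> stop
    }

private:
    QAudioProbe probe;
    QElapsedTimer silenceTimer;
};
You could then connect silenceDetected() to an existing slot that calls audioRecorder->stop().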
Related post: Qt: API to write raw QAudioInput data to file just like QAudioRecorder
Short version:
In my tests with Android 5.0 Lollipop I have noticed that android.bluetooth.le.BluetoothLeScanner detects BLE devices less frequently than Android 4.4 KitKat did. Why is this, and is there an alternative?
Long version:
I am developing an Android application, specifically for the Nexus 7 tablet, that focuses on detecting Bluetooth Low Energy (BLE) devices. The app is mainly interested in the RSSI value of the beacons, to determine their proximity to the tablet. This means I won't need to connect to the BLE device, since the RSSI value is passed to the scan callback when the device is detected.
In Android 4.4 KitKat, when I call BluetoothAdapter.startLeScan(LeScanCallback), my callback gets called only ONCE for every detected BLE device. (I have seen some discussions claiming that this behaviour can differ per device.) However, I am interested in the constantly changing RSSI value, so the currently recommended way is to continuously call startLeScan and stopLeScan with a set interval (250 ms in my case):
public class TheOldWay {
    private static final int SCAN_INTERVAL_MS = 250;

    private Handler scanHandler = new Handler();
    private boolean isScanning = false;

    public void beginScanning() {
        scanHandler.post(scanRunnable);
    }

    private Runnable scanRunnable = new Runnable() {
        @Override
        public void run() {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();

            if (isScanning) {
                adapter.stopLeScan(leScanCallback);
            } else if (!adapter.startLeScan(leScanCallback)) {
                // an error occurred during startLeScan
            }

            isScanning = !isScanning;
            scanHandler.postDelayed(this, SCAN_INTERVAL_MS);
        }
    };

    private BluetoothAdapter.LeScanCallback leScanCallback = new BluetoothAdapter.LeScanCallback() {
        @Override
        public void onLeScan(BluetoothDevice device, int rssi, byte[] scanRecord) {
            // use the RSSI value
        }
    };
}
Essentially this gives me the required results, but this process is very resource intensive and eventually leads to an unresponsive bluetooth adapter.
For these reasons I upgraded my Nexus 7 to Android 5.0 Lollipop, to see whether my BLE issues would be fixed. In Lollipop BluetoothAdapter.startLeScan(LeScanCallback) is deprecated and replaced with a new API that allows for some more control over the scanning process. From my first tests, it appears startScan does not continuously call my callback (on my Nexus 7) when the RSSI values change, so I still need to use the startScan / stopScan implementation:
@TargetApi(21)
public class TheNewWay {
    private static final int SCAN_INTERVAL_MS = 250;

    private Handler scanHandler = new Handler();
    private List<ScanFilter> scanFilters = new ArrayList<ScanFilter>();
    private ScanSettings scanSettings;
    private boolean isScanning = false;

    public void beginScanning() {
        ScanSettings.Builder scanSettingsBuilder = new ScanSettings.Builder();
        scanSettingsBuilder.setScanMode(ScanSettings.SCAN_MODE_LOW_LATENCY);
        scanSettings = scanSettingsBuilder.build();

        scanHandler.post(scanRunnable);
    }

    private Runnable scanRunnable = new Runnable() {
        @Override
        public void run() {
            BluetoothLeScanner scanner = BluetoothAdapter.getDefaultAdapter().getBluetoothLeScanner();

            if (isScanning) {
                scanner.stopScan(scanCallback);
            } else {
                scanner.startScan(scanFilters, scanSettings, scanCallback);
            }

            isScanning = !isScanning;
            scanHandler.postDelayed(this, SCAN_INTERVAL_MS);
        }
    };

    private ScanCallback scanCallback = new ScanCallback() {
        @Override
        public void onScanResult(int callbackType, ScanResult result) {
            super.onScanResult(callbackType, result);

            int rssi = result.getRssi();
            // do something with the RSSI value
        }

        @Override
        public void onScanFailed(int errorCode) {
            super.onScanFailed(errorCode);
            // a scan error occurred
        }
    };
}
As you can see, I have configured the scanner using the ScanSettings class, which allows you to set the scanMode. I use ScanSettings.SCAN_MODE_LOW_LATENCY, which has the following documentation: "Scan using highest duty cycle. It's recommended to only use this mode when the application is running in the foreground." Sounds exactly like what I want, but unfortunately I only get a beacon detection every 15-30 seconds, whereas the KitKat version shows me the same beacon every 1-2 seconds at this scan interval.
Do you have any idea what could be the reason for this difference? Am I missing something, maybe some new settings? Are there alternative ways of doing the above?
Thanks a lot in advance!
Abel
PS: I wanted to include more links to resources I've used, but I don't have the rep points for it yet.
I have gotten very different results with a Nexus 5 running the new Android 5.0 scanning APIs. Detections of BLE packets came in at near real time when using SCAN_MODE_LOW_LATENCY, at every 100ms for BLE beacons transmitting at 10Hz.
You can read the full results here:
http://developer.radiusnetworks.com/2014/10/28/android-5.0-scanning.html
These tests are based on running the open-source Android Beacon Library 2.0's experimental android-l-apis branch here.
It is not obvious what the difference is in your test results, but it is possible that starting and stopping scanning is changing the results.
EDIT: it is possible the hardware is the difference. See a report of similar timings on the Nexus 4: https://github.com/AltBeacon/android-beacon-library/issues/59#issuecomment-64281446
I don't have 50 reputation for a comment yet, so bear with me; this comment will be in the form of an answer. In your code, shouldn't this part:
if (isScanning) {
scanner.startScan(...)
be this instead:
if (!isScanning) {
scanner.startScan(...)
Because, following your code, you're calling stopScan() before starting a scan. It may not have a direct effect on the result if the stopScan() method is idempotent/safe. But, you know, for the sake of code intelligibility you should edit the question, and do the same to your code; sometimes byzantine things are at play ;)
Have you tried larger values for SCAN_INTERVAL_MS? If yes, how large?
I have experienced very similar results with my Nexus 4, in both KitKat and Lollipop.
With KitKat the bluetooth adapter also eventually went unresponsive; at first I thought it could be related to a short scan interval (200 ms), but increasing that number to even a second didn't help. On that matter, I found that disabling and re-enabling the adapter programmatically when it becomes unresponsive sometimes solves the problem. Unfortunately, I can't say that it works all the time.
Now with Lollipop, in which I had high hopes of solving these issues, I experienced the same behaviour that you describe. I also had to use the startScan / stopScan implementation, getting similar results regarding the detection times. Sadly, I haven't found a workaround to get results more quickly.
Based on what you describe I suppose it could be a hardware issue, even though the Nexus 7 and Nexus 4 are from different manufacturers (Asus and LG).
I know I'm not providing much help here beyond trying to answer your question about whether you are missing something; I don't think you are. I think the problem is the hardware, or the bluetooth API that still doesn't behave the way it should across different devices.
From API 21 onwards, Android uses SCAN_MODE_LOW_POWER by default.
Try SCAN_MODE_BALANCED and see if it gets better.
If you search for BW13_DayOne_Session1 Bluetooth Advanced on Google, you will find a PDF document that gives you the latencies for devices based on the settings for discovery (see page 8). I'm guessing your problem has to do with these timings. You can verify by figuring out the advertising configuration for the device you are testing (Adv Int, Duty Cycle), then figuring out what the API settings are doing to configure the scan interval, etc. Once you have these, you can use that table to interpolate and see if you're getting the results you expect.
I know this is a software site, but often when interfacing with hardware you need to know the protocol; otherwise you're shooting in the dark.
I was thinking about the best approach to properly handle serial port communication in my program. I have a device that sends me data; I'm receiving it using the DataReceived event and the ReadExisting method inside it. Everything it reads is put into a buffer, and when the last line equals some string, I start to parse it into some kind of packet.
Aside from that, when I send data to this device and wait for a response, I set a flag:
bool isReady = false;

while (!isReady)
    Thread.Sleep(500);
In the data parsing method I set this flag to true, so when I receive the packet data, the code can jump out of this loop and continue as needed. But in my opinion this is not a clean/good way to do it. And there is a problem: if the device never sends the packet I need, the program is stuck in the loop forever.
I was wondering, how would you guys resolve this case in your code?
Thanks, and sorry for my bad English.
Don't just wait for a response in an endless loop. Use events, as you mentioned, to get the data. See this other SO question as well: SerialPort class and DataReceived event... Getting bytes. Use ReadLine or ReadExisting? Any examples?
For now I've added a 5-second timeout using the following code:
bool isReady = false;
DateTime timeout = DateTime.Now;

while (!isReady)
{
    Thread.Sleep(500);

    if ((DateTime.Now - timeout).TotalMilliseconds >= 5000)
        break;
}
So when no response is received, the code just jumps out of this loop. This solves one of my problems, but I would still like to know other ways to handle this. If you have any ideas, please share them.
How can I write data to a serial port with a delay between the sent messages?
This is my code:
void MainWindow::on_pushButton_Done_clicked()
{
    if (sport->isOpen()) {
        sport->clear();

        QString cmd = Phase + Mode;

        // Write Stop
        sport->write("stop!", 5);

        // Write Mode
        sport->write(cmd.toStdString().c_str(), cmd.toStdString().length());

        // Write Speed
        sport->write(Speed.toStdString().c_str(), Speed.toStdString().length());

        // Write Direction
        sport->write(Direction.toStdString().c_str(), Direction.toStdString().length());

        // Run
        sport->write("start!", 6);
    }
}
My device receives an error message when I call this function.
Thank you.
Two options:
Use waitForBytesWritten to ensure the bytes are written, and then a short sleep. However, this will block the thread and therefore the GUI.
The other is to use a QTimer that triggers another slot a few times, together with a field that indicates what needs to be sent next; a sketch of this follows.
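Here is a rough sketch of the QTimer option, reusing the names from the question. The pendingWrites member (a QList<QByteArray>), the sendTimer member (a QTimer whose timeout() is connected to sendNextCommand() in the constructor) and the 100 ms spacing are my own additions:
// Queue the commands, then let the timer send one per tick instead of
// writing them back to back.
void MainWindow::on_pushButton_Done_clicked()
{
    if (!sport->isOpen())
        return;

    sport->clear();

    QString cmd = Phase + Mode;

    pendingWrites.clear();
    pendingWrites << QByteArray("stop!")
                  << cmd.toUtf8()
                  << Speed.toUtf8()
                  << Direction.toUtf8()
                  << QByteArray("start!");

    sendTimer->start(100);    // fires sendNextCommand() every 100 ms
}

void MainWindow::sendNextCommand()
{
    if (pendingWrites.isEmpty()) {
        sendTimer->stop();    // nothing left to send
        return;
    }

    sport->write(pendingWrites.takeFirst());
}
This keeps the GUI responsive because nothing blocks; the delay between messages is just the timer interval.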
Looks like you are trying to program some step motor controller or something similar.
Usually in such controllers you should wait for controller response to verify that command was processed properly.
It looks like your code design is poor. Move everything related to this controller into a separate class with a set of slots, something like startRotateLeftWithSpeed(double); see the sketch below. The code will be cleaner, and it will be easy to move it to a thread if you decide to use methods like waitForBytesWritten proposed in the other answer.
You should definitely read the controller manual more carefully.
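For illustration, a rough skeleton of such a class. The class name, the slot names and the command strings ("left ...!", "stop!") are placeholders, not taken from any real controller protocol:
#include <QObject>
#include <QSerialPort>

class MotorController : public QObject
{
    Q_OBJECT
public:
    explicit MotorController(QSerialPort *port, QObject *parent = nullptr)
        : QObject(parent), port(port) {}

public slots:
    void startRotateLeftWithSpeed(double speed)
    {
        // Build the command and wait until it has left the port; if this
        // object lives in a worker thread, only that thread blocks.
        QByteArray cmd = "left " + QByteArray::number(speed) + "!";
        port->write(cmd);
        port->waitForBytesWritten(1000);
    }

    void stop()
    {
        port->write("stop!");
        port->waitForBytesWritten(1000);
    }

private:
    QSerialPort *port;
};
Moving this object to a QThread keeps the blocking waitForBytesWritten() calls off the GUI thread.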
Let's assume I am a game and I have a global int* that contains my health. A game trainer's job is to modify this value to whatever it wants in order to achieve god mode. I've looked up tutorials on game trainers to understand how they work, and the general idea is to use a memory scanner to try to find the address of a certain value, then modify what is at this address by injecting a DLL or whatever.
But I made a simple program with a global int*, and its address changes every time I run the app, so I don't get how game trainers can hard-code these addresses. Or is my example wrong?
What am I missing?
The way this is usually done is by tracing the pointer chain from a static variable up to the heap address containing the variable in question. For example:
#include <vector>

struct CharacterStats
{
    int health;
    // ...
};

class Character
{
public:
    CharacterStats* stats;
    // ...

    void hit(int damage)
    {
        stats->health -= damage;
        if (stats->health <= 0)
            die();
    }
};

class Game
{
public:
    Character* main_character;
    std::vector<Character*> enemies;
    // ...
};

Game* game;

int main()
{
    game = new Game();
    game->main_character = new Character();
    game->main_character->stats = new CharacterStats;
    // ...
}
In this case, if you follow mikek3332002's advice and set a breakpoint inside the Character::hit() function and nop out the subtraction, it would cause all characters, including enemies, to be invulnerable. The solution is to find the address of the "game" variable (which should reside in the data segment or a function's stack), and follow all the pointers until you find the address of the health variable.
Some tools, e.g. Cheat Engine, have functionality to automate this, and attempt to find the pointer chain by themselves. You will probably have to resort to reverse-engineering for more complicated cases, though.
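To make the pointer-chain idea concrete, here is a rough sketch of how an external trainer could follow such a chain on Windows with ReadProcessMemory, using the game / main_character / stats / health layout from the example above. The process id, the address of the static "game" pointer and the three offsets are placeholders that a scanner such as Cheat Engine would have to supply:
#include <windows.h>
#include <cstdint>
#include <cstdio>

int main()
{
    DWORD pid = 1234;                                // placeholder process id
    HANDLE proc = OpenProcess(PROCESS_VM_READ, FALSE, pid);
    if (!proc)
        return 1;

    // Dereference at each step, then add the member's offset. This assumes
    // the target process has the same pointer size as this program.
    uintptr_t addr = 0x00405000;                     // placeholder: address of the static "game" pointer
    uintptr_t offsets[] = { 0x0, 0x0, 0x0 };         // main_character, stats, health offsets

    for (uintptr_t offset : offsets) {
        uintptr_t next = 0;
        ReadProcessMemory(proc, (LPCVOID)addr, &next, sizeof(next), NULL);
        addr = next + offset;                        // follow the pointer, apply the offset
    }

    int health = 0;
    ReadProcessMemory(proc, (LPCVOID)addr, &health, sizeof(health), NULL);
    printf("health = %d\n", health);

    CloseHandle(proc);
    return 0;
}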
Discovery of the access pointers is quite cumbersome and static memory values are difficult to adapt to different compilers or game versions.
With API hooking of malloc(), free(), etc. there is a different method than following pointers. Discovery starts with recording all dynamic memory allocations and doing memory search in parallel. The found heap memory address is then reverse matched against the recorded memory allocations. You get to know the size of the object and the offset of your value within the object. You repeat this with backtracing and get the jump-back code address of a malloc() call or a C++ constructor. With that information you can track and modify all objects which get allocated from there. You dump the objects and compare them and find a lot more interesting values. E.g. the universal elite game trainer "ugtrain" does it like this on Linux. It uses LD_PRELOAD.
Adaptation works by "objdump -D"-based disassembly and simply searching for the library function call with the known memory size in it.
See: http://en.wikipedia.org/wiki/Trainer_%28games%29
Ugtrain source: https://github.com/sriemer/ugtrain
The malloc() hook looks like this:
#define _GNU_SOURCE   /* for RTLD_NEXT */
#include <stdbool.h>
#include <stddef.h>
#include <dlfcn.h>

/* postprocess_malloc() is implemented elsewhere in ugtrain */

static __thread bool no_hook = false;

void *malloc (size_t size)
{
    void *mem_addr;
    static void *(*orig_malloc)(size_t size) = NULL;

    /* handle malloc() recursion correctly */
    if (no_hook)
        return orig_malloc(size);

    /* get the libc malloc function */
    no_hook = true;
    if (!orig_malloc)
        *(void **) (&orig_malloc) = dlsym(RTLD_NEXT, "malloc");

    mem_addr = orig_malloc(size);

    /* real magic -> backtrace and send out spied information */
    postprocess_malloc(size, mem_addr);
    no_hook = false;

    return mem_addr;
}
But if the found memory address is located within the executable or a library in memory, then ASLR is likely the cause of the address changing. On Linux, libraries are PIC (position-independent code), and with the latest distributions all executables are PIE (position-independent executables) as well.
EDIT: never mind, it seems it was just good luck; however, the last 3 numbers of the pointer seem to stay the same. Perhaps this is ASLR kicking in and changing the base image address or something?
Ahh, my bad, I was using %d with printf to print the address instead of %p. After using %p, the address stayed the same.
#include <stdio.h>

int *something = NULL;

int main()
{
    something = new int;
    *something = 5;

    fprintf(stdout, "Address of something: %p\nValue of something: %d\nPointer Address of something: %p",
            &something, *something, something);

    getchar();
    return 0;
}
Example for a dynamically allocated variable.
The value I want to find is the number of lives, to stop my lives from being reduced to 0 and getting a game over.
Play the game and search for the location of the lives variable in this instance.
Once found, use a disassembler/debugger to watch that location for changes.
Lose a life.
The debugger should have reported the address where the decrement occurred.
Replace that instruction with no-ops (a sketch of doing this programmatically from a trainer follows below).
I got this pattern from the program called TSearch.
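A trainer typically does the last step programmatically rather than by hand in the debugger. Here is a rough sketch for Windows; the process id, the address of the decrement instruction and the patch length are placeholders that you would first have to find with the debugger:
#include <windows.h>
#include <cstdint>
#include <cstring>

int main()
{
    DWORD pid = 1234;                                // placeholder process id
    uintptr_t codeAddr = 0x00401a2b;                 // placeholder: address of the decrement instruction
    SIZE_T patchLen = 3;                             // placeholder: length of that instruction in bytes

    HANDLE proc = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_WRITE | PROCESS_VM_READ,
                              FALSE, pid);
    if (!proc)
        return 1;

    unsigned char nops[16];
    memset(nops, 0x90, sizeof(nops));                // 0x90 is the x86 NOP opcode

    // Code pages are normally not writable, so unprotect, patch, then restore.
    DWORD oldProtect = 0;
    VirtualProtectEx(proc, (LPVOID)codeAddr, patchLen, PAGE_EXECUTE_READWRITE, &oldProtect);
    WriteProcessMemory(proc, (LPVOID)codeAddr, nops, patchLen, NULL);
    VirtualProtectEx(proc, (LPVOID)codeAddr, patchLen, oldProtect, &oldProtect);

    CloseHandle(proc);
    return 0;
}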
A few related websites found from researching this topic:
http://deviatedhacking.com/index.php?/topic/75-dynamic-memory-allocation/
http://www.edgeofnowhere.cc/viewforum.php?f=183
http://www.oldschoolhack.de/tutorials/Theories%20and%20methods%20of%20code-caves.htm
http://webcache.googleusercontent.com/search?q=cache:4wzMzFIZx54J:gamehacking.com/forums/tutorials-beginners/11597-c-making-game-trainer.html+reading+a+dynamic+memory+address+game+trainer&cd=2&hl=en&ct=clnk&gl=au&client=firefox-a (A google cache version)
http://www.codeproject.com/KB/cpp/codecave.aspx
The way things like GameShark codes were figured out was by dumping the memory image of the application, then doing one thing, then looking to see what changed. There might be a few things changing, but there should be patterns to look for. E.g. dump memory, shoot, dump memory, shoot again, dump memory, reload. Then look for changes and get an idea of where/how ammo is stored. For health it'll be similar, but a lot more things will be changing (since you'll be moving at the very least). It'll be easiest, though, to do it while minimizing the "external effects": e.g. don't try to diff memory dumps during a firefight because a lot is happening; do your diffs while standing in lava, or falling off a building, or something of that nature.