Sometimes I get an "underrun occurred" warning from the ALSA library, which means the audio output is not receiving values in time to play them. ALSA then repeats the old buffer values on the speaker.
How can I avoid underruns with QAudioOutput?
I am using Qt 5.9.1 on an ARM-based CPU running Debian 8.
I tried to change the buffer size:
audioOutput->setBufferSize(144000);
qDebug()<<"buffersize "<<audioOutput->bufferSize()<<" period size"
<<audioOutput->periodSize();
I get: buffersize 144000 period size 0
and after audioOutput->start() I get: buffersize 19200 period size 3840
Here is what I am doing:
audioOutput->setBufferSize(144000);
qDebug()<<"buffersize "<<audioOutput->bufferSize()<<" period size"
<<audioOutput->periodSize();
m_audioInput = audioInput->start();
m_audioOutput = audioOutput->start();
qDebug()<<"buffersize "<<audioOutput->bufferSize()<<" period size"
<<audioOutput->periodSize();
connect(m_audioInput, SIGNAL(readyRead()), SLOT(readBufferSlot()));
Once audio data is recorded, I write the values from the QIODevice m_audioInput to the QIODevice m_audioOutput.
So I think I sometimes have a timing issue; the audio interval for both is 1000 ms before and after start().
Why can't I increase the buffer size, and how can I avoid underruns?
Based on my experience with QAudioOutput, its buffer is intended only to keep real-time playback going; you can't, for example, drop one minute of sound directly into the QIODevice and expect it to be buffered and played sequentially. That doesn't mean you can't buffer sound, it just means you have to do it yourself.
I made the following example in "C style" as an all-in-one solution; it buffers 1000 milliseconds (1 second) of the input before playing it.
The event loop needs to be available to process the Qt signals.
In my tests, 1 second of buffering is enough to avoid underruns.
#include <QtCore>
#include <QtMultimedia>
#define MAX_BUFFERED_TIME 1000
static inline int timeToSize(int ms, const QAudioFormat &format)
{
return ((format.channelCount() * (format.sampleSize() / 8) * format.sampleRate()) * ms / 1000);
}
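//Example (assuming the 16-bit mono 44100 Hz format used in main below):
//timeToSize(1000, format) == 1 * (16 / 8) * 44100 == 88200 bytes per second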
struct AudioContext
{
QAudioInput *m_audio_input;
QIODevice *m_input_device;
QAudioOutput *m_audio_output;
QIODevice *m_output_device;
QByteArray m_buffer;
QAudioDeviceInfo m_input_device_info;
QAudioDeviceInfo m_output_device_info;
QAudioFormat m_format;
int m_time_to_buffer;
int m_max_size_to_buffer;
int m_size_to_buffer;
bool m_buffer_requested = true; //Needed
bool m_play_called = false;
};
void play(AudioContext *ctx)
{
//Mark that the scheduled async call has now run
ctx->m_play_called = false;
if (ctx->m_buffer.isEmpty())
{
//If the buffer is empty, flag that nothing should be played
//until it reaches the minimum buffered size again
ctx->m_buffer_requested = true;
return;
}
else if (ctx->m_buffer.size() < ctx->m_size_to_buffer)
{
//If the buffer doesn't contain enough data yet,
//check whether the flag indicates it is refilling from an empty state;
//in that case, don't play anything until the minimum size is reached
if (ctx->m_buffer_requested)
return;
}
else
{
//Buffer is ready and data can be played
ctx->m_buffer_requested = false;
}
int readlen = ctx->m_audio_output->periodSize();
int chunks = ctx->m_audio_output->bytesFree() / readlen;
//Write chunks while the output device has free space and the buffer has data
while (chunks)
{
//Get chunk from the buffer
QByteArray samples = ctx->m_buffer.mid(0, readlen);
int len = samples.size();
ctx->m_buffer.remove(0, len);
//Write the chunk to the output device
if (len)
{
ctx->m_output_device->write(samples);
}
//If chunk is smaller than the output chunk size, exit loop
if (len != readlen)
break;
//Decrease the available number of chunks
chunks--;
}
}
void preplay(AudioContext *ctx)
{
//Check whether a call to the play function is already pending
//If not, schedule an asynchronous call to it
if (!ctx->m_play_called)
{
ctx->m_play_called = true;
QTimer::singleShot(0, [=]{play(ctx);});
}
}
void init(AudioContext *ctx)
{
/***** INITIALIZE INPUT *****/
//Check if the format is supported by the chosen input device
if (!ctx->m_input_device_info.isFormatSupported(ctx->m_format))
{
qDebug() << "Format not supported by the input device";
return;
}
//Initialize the audio input device
ctx->m_audio_input = new QAudioInput(ctx->m_input_device_info, ctx->m_format, qApp);
ctx->m_input_device = ctx->m_audio_input->start();
if (!ctx->m_input_device)
{
qDebug() << "Failed to open input audio device";
return;
}
//Append incoming samples to the buffer whenever data is available in the input device
QObject::connect(ctx->m_input_device, &QIODevice::readyRead, [=]{
//Read sound samples from input device to buffer
ctx->m_buffer.append(ctx->m_input_device->readAll());
preplay(ctx);
});
/***** INITIALIZE INPUT *****/
/***** INITIALIZE OUTPUT *****/
//Check if the format is supported by the chosen output device
if (!ctx->m_output_device_info.isFormatSupported(ctx->m_format))
{
qDebug() << "Format not supported by the output device";
return;
}
int internal_buffer_size;
//Adjust internal buffer size
if (ctx->m_format.sampleRate() >= 44100)
internal_buffer_size = (1024 * 10) * ctx->m_format.channelCount();
else if (ctx->m_format.sampleRate() >= 24000)
internal_buffer_size = (1024 * 6) * ctx->m_format.channelCount();
else
internal_buffer_size = (1024 * 4) * ctx->m_format.channelCount();
//Initialize the audio output device
ctx->m_audio_output = new QAudioOutput(ctx->m_output_device_info, ctx->m_format, qApp);
//Increase the buffer size to enable higher sample rates
ctx->m_audio_output->setBufferSize(internal_buffer_size);
//Compute the size in bytes to be buffered based on the current format
ctx->m_size_to_buffer = int(timeToSize(ctx->m_time_to_buffer, ctx->m_format));
//Define the maximum size the buffer is allowed to reach
//This value is used to discard buffered data that is too old
ctx->m_max_size_to_buffer = ctx->m_size_to_buffer + int(timeToSize(MAX_BUFFERED_TIME, ctx->m_format));
ctx->m_output_device = ctx->m_audio_output->start();
if (!ctx->m_output_device)
{
qDebug() << "Failed to open output audio device";
return;
}
//Timer that helps to keep playing data while it's available on the internal buffer
QTimer *timer_play = new QTimer(qApp);
timer_play->setTimerType(Qt::PreciseTimer);
QObject::connect(timer_play, &QTimer::timeout, [=]{
preplay(ctx);
});
timer_play->start(10);
//Timer that checks for too old data in the buffer
QTimer *timer_verifier = new QTimer(qApp);
QObject::connect(timer_verifier, &QTimer::timeout, [=]{
if (ctx->m_buffer.size() >= ctx->m_max_size_to_buffer)
ctx->m_buffer.clear();
});
timer_verifier->start(qMax(ctx->m_time_to_buffer, 10));
/***** INITIALIZE OUTPUT *****/
qDebug() << "Playing...";
}
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
AudioContext ctx;
QAudioFormat format;
format.setCodec("audio/pcm");
format.setSampleRate(44100);
format.setChannelCount(1);
format.setSampleSize(16);
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);
ctx.m_format = format;
ctx.m_input_device_info = QAudioDeviceInfo::defaultInputDevice();
ctx.m_output_device_info = QAudioDeviceInfo::defaultOutputDevice();
ctx.m_time_to_buffer = 1000;
init(&ctx);
return a.exec();
}
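For completeness: the example only depends on Qt Core and Qt Multimedia, so a minimal qmake project file could look like this (a sketch, assuming the code above is saved as main.cpp):
QT += core multimedia
QT -= gui
CONFIG += console c++11
SOURCES += main.cpp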
I'm trying to use a Teensy 4.1 as an interface between an encoder and ROS thanks to micro-ROS (Arduino version).
I would like to publish the position of a wheel to the /jointState topic with the Teensy, but there is no example in the micro-ros arduino GitHub repo.
I've tried to inspect the sensor_msgs/msg/JointState message struct, but everything is a bit fuzzy and I don't understand how to make it work. I can't understand what the rosidl_runtime_c__double__Sequence type is.
I've tried several things, but I always get an error about operand types:
no match for 'operator=' (operand types are 'rosidl_runtime_c__String' and 'const char [18]')
msg.name.data[0] = "drivewhl_1g_joint";
Here is my Arduino code:
#include <micro_ros_arduino.h>
#include <stdio.h>
#include <rcl/rcl.h>
#include <rcl/error_handling.h>
#include <rclc/rclc.h>
#include <rclc/executor.h>
#include <sensor_msgs/msg/joint_state.h>
rcl_publisher_t publisher;
sensor_msgs__msg__JointState msg;
rclc_executor_t executor;
rclc_support_t support;
rcl_allocator_t allocator;
rcl_node_t node;
rcl_timer_t timer;
#define LED_PIN 13
#define RCCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){error_loop();}}
#define RCSOFTCHECK(fn) { rcl_ret_t temp_rc = fn; if((temp_rc != RCL_RET_OK)){}}
void error_loop(){
while(1){
digitalWrite(LED_PIN, !digitalRead(LED_PIN));
delay(100);
}
}
void timer_callback(rcl_timer_t * timer, int64_t last_call_time)
{
RCLC_UNUSED(last_call_time);
if (timer != NULL) {
RCSOFTCHECK(rcl_publish(&publisher, &msg, NULL));
//Does not work:
//msg.name=["drivewhl_1g_joint","drivewhl_1d_joint","drivewhl_2g_joint","drivewhl_2d_joint"];
//msg.position=["1.3","0.2", "0","0"];
msg.name.size = 1;
msg.name.data[0] = "drivewhl_1g_joint";
msg.position.size = 1;
msg.position.data[0] = 1.85;
}
}
void setup() {
set_microros_transports();
pinMode(LED_PIN, OUTPUT);
digitalWrite(LED_PIN, HIGH);
delay(2000);
allocator = rcl_get_default_allocator();
//create init_options
RCCHECK(rclc_support_init(&support, 0, NULL, &allocator));
// create node
RCCHECK(rclc_node_init_default(&node, "micro_ros_arduino_node", "", &support));
// create publisher
RCCHECK(rclc_publisher_init_default(
&publisher,
&node,
ROSIDL_GET_MSG_TYPE_SUPPORT(sensor_msgs, msg, JointState),
"JointState"));
// create timer,
const unsigned int timer_timeout = 1000;
RCCHECK(rclc_timer_init_default(
&timer,
&support,
RCL_MS_TO_NS(timer_timeout),
timer_callback));
// create executor
RCCHECK(rclc_executor_init(&executor, &support.context, 1, &allocator));
RCCHECK(rclc_executor_add_timer(&executor, &timer));
}
void loop() {
delay(100);
RCSOFTCHECK(rclc_executor_spin_some(&executor, RCL_MS_TO_NS(100)));
}
I'm a beginner with ROS and C; it may be a very dumb question, but I don't know how to solve it. Thanks for your help!
rosidl_runtime_c__String__Sequence is a structure used to hold string data that is to be transmitted. Specifically, it is a sequence of rosidl_runtime_c__String data. You're running into an error because rosidl_runtime_c__String is itself a struct with no custom operators defined, so your assignment fails since the types are not directly convertible. What you need to do instead is use the rosidl_runtime_c__String.data field. You can see slightly more info here.
void timer_callback(rcl_timer_t * timer, int64_t last_call_time)
{
RCLC_UNUSED(last_call_time);
if (timer != NULL) {
//msg.name=["drivewhl_1g_joint","drivewhl_1d_joint","drivewhl_2g_joint","drivewhl_2d_joint"];
//msg.position=["1.3","0.2", "0","0"];
msg.name.size = 1;
msg.name.data[0].data = "drivewhl_1g_joint";
msg.name.data[0].size = 17; //Size in bytes excluding null terminator
msg.position.size = 1;
msg.position.data[0] = 1.85;
RCSOFTCHECK(rcl_publish(&publisher, &msg, NULL));
}
}
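If you prefer not to fill in the size/capacity/data fields by hand, rosidl_runtime_c also ships helper functions that allocate the sequences for you, declared in rosidl_runtime_c/string_functions.h and rosidl_runtime_c/primitives_sequence_functions.h. A minimal sketch, assuming the global msg from the question and a single joint:
#include <rosidl_runtime_c/string_functions.h>
#include <rosidl_runtime_c/primitives_sequence_functions.h>
// Call once, e.g. at the end of setup(), before the first publish.
void init_joint_state_msg(void)
{
  // Allocate the name sequence for one entry and copy the joint name into it.
  rosidl_runtime_c__String__Sequence__init(&msg.name, 1);
  rosidl_runtime_c__String__assign(&msg.name.data[0], "drivewhl_1g_joint");
  // Allocate the position sequence for one entry and set its value.
  rosidl_runtime_c__double__Sequence__init(&msg.position, 1);
  msg.position.data[0] = 1.85;
}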
I also spent quite some time trying to publish a JointState message from my ESP32 running micro-ROS, and I also couldn't find a working example. Finally, I was successful; maybe it will help someone.
In simple words:
.capacity contains the maximum number of elements
.size contains the actual number of elements (strlen in the case of a string)
.data should be allocated with malloc as .capacity * sizeof(element type)
each string within the sequence should be allocated separately
This is my code that allocates memory for 12 joints, named j0-j11. Good luck!
...
// Declarations
rcl_publisher_t pub_joint;
sensor_msgs__msg__JointState joint_state_msg;
...
// Create publisher
RCCHECK(rclc_publisher_init_default(&pub_joint, &node,
ROSIDL_GET_MSG_TYPE_SUPPORT(sensor_msgs, msg, JointState),
"/hexapod/joint_state"));
//Allocate memory
joint_state_msg.name.capacity = 12;
joint_state_msg.name.size = 12;
joint_state_msg.name.data = (rosidl_runtime_c__String*) malloc(joint_state_msg.name.capacity*sizeof(rosidl_runtime_c__String));
for(int i=0;i<12;i++) {
joint_state_msg.name.data[i].data = (char*) malloc(5);
joint_state_msg.name.data[i].capacity = 5;
sprintf(joint_state_msg.name.data[i].data,"j%d",i);
joint_state_msg.name.data[i].size = strlen(joint_state_msg.name.data[i].data);
}
joint_state_msg.position.size=12;
joint_state_msg.position.capacity=12;
joint_state_msg.position.data = (double*) malloc(joint_state_msg.position.capacity*sizeof(double));
joint_state_msg.velocity.size=12;
joint_state_msg.velocity.capacity=12;
joint_state_msg.velocity.data = (double*) malloc(joint_state_msg.velocity.capacity*sizeof(double));
joint_state_msg.effort.size=12;
joint_state_msg.effort.capacity=12;
joint_state_msg.effort.data = (double*) malloc(joint_state_msg.effort.capacity*sizeof(double));
for(int i=0;i<12;i++) {
joint_state_msg.position.data[i]=0.0;
joint_state_msg.velocity.data[i]=0.0;
joint_state_msg.effort.data[i]=0.0;
}
....
//Publish
RCSOFTCHECK(rcl_publish(&pub_joint, &joint_state_msg, NULL));
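After that one-time allocation, each publish only needs the values (and optionally the header timestamp) refreshed. A sketch, where get_joint_angle() is a hypothetical helper returning the current angle of a joint:
// Refresh the message fields before each rcl_publish() call.
for (int i = 0; i < 12; i++) {
joint_state_msg.position.data[i] = get_joint_angle(i); // hypothetical helper
}
joint_state_msg.header.stamp.sec = millis() / 1000;
joint_state_msg.header.stamp.nanosec = (millis() % 1000) * 1000000UL;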
I've written this code to receive a series of char variables through USART6 and store them in a string. The problem is that the first received value is just junk! Any help would be appreciated.
while(1)
{
//memset(RxBuffer, 0, sizeof(RxBuffer));
i = 0;
requestRead(&dt, 1);
RxBuffer[i++] = dt;
while (i < 11)
{
requestRead(&dt, 1);
RxBuffer[i++] = dt;
HAL_Delay(5);
}
}
The function definition:
static void requestRead(char *buffer, uint16_t length)
{
while (HAL_UART_Receive_IT(&huart6, buffer, length) != HAL_OK)
HAL_Delay(10);
}
First of all, the HAL_Delay seems to be redundant. Is there any particular reason for it?
The HAL_UART_Receive_IT function is used for non-blocking mode. What you have written seems to be more like blocking mode, which uses the HAL_UART_Receive function.
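If blocking behaviour is actually what you want, the whole loop can be reduced to a single call. A sketch, assuming the same huart6 and 11-byte frame as in the question:
char RxBuffer[11];
// Blocks (up to the timeout) until all 11 bytes have been received.
if (HAL_UART_Receive(&huart6, (uint8_t *)RxBuffer, sizeof(RxBuffer), HAL_MAX_DELAY) == HAL_OK)
{
// RxBuffer now holds the complete frame.
}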
Also, I believe you need something like this:
Somewhere in the main:
// global variables
volatile uint8_t Rx_byte;
volatile uint8_t Rx_data[10];
volatile uint8_t Rx_indx = 0;
HAL_UART_Receive_IT(&huart1, &Rx_byte, 1);
And then the callback function:
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
if (huart->Instance == USART1) { // Current UART
Rx_data[Rx_indx++] = Rx_byte; // Add data to Rx_Buffer
}
HAL_UART_Receive_IT(&huart1, &Rx_byte, 1);
}
The idea is to always receive just one byte and save it into an array, then somewhere check the number of received bytes, look for some pattern, etc., and process the received frame.
On the other hand, if the number of bytes is always the same, you can change the HAL_UART_Receive_IT call and set the correct byte count.
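For example, if every frame ends with a known terminator, the callback above could be extended to flag a complete frame. A sketch, assuming a newline-terminated frame and the variables declared earlier:
volatile uint8_t Frame_ready = 0;
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
if (huart->Instance == USART1) { // Current UART
if (Rx_indx < sizeof(Rx_data))
Rx_data[Rx_indx++] = Rx_byte; // Store the byte, guarding against overflow
if (Rx_byte == '\n') { // Terminator seen: the frame is complete
Frame_ready = 1;
Rx_indx = 0; // Restart for the next frame
}
}
HAL_UART_Receive_IT(&huart1, (uint8_t *)&Rx_byte, 1); // Re-arm single-byte reception
}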
I want to use pre-made files instead of live capture in the following program to track a person.
What is the RealSense SDK API used to load pre-made files and fetch them frame by frame?
Is it possible to detect/track a person in general video/image files captured with any other camera?
Example program: Example Source Link
Source:
#include <thread>
#include <iostream>
#include <signal.h>
#include "version.h"
#include "pt_utils.hpp"
#include "pt_console_display.hpp"
#include "pt_web_display.hpp"
#include "or_console_display.hpp"
#include "or_web_display.hpp"
using namespace std;
using namespace rs::core;
using namespace rs::object_recognition;
// Version number of the samples
extern constexpr auto rs_sample_version = concat("VERSION: ",RS_SAMPLE_VERSION_STR);
// Doing the OR processing for a frame can take longer than the frame interval, so we
// keep track of whether or not we are still processing the last frame.
bool is_or_processing_frame = false;
unique_ptr<web_display::pt_web_display> pt_web_view;
unique_ptr<web_display::or_web_display> or_web_view;
unique_ptr<console_display::pt_console_display> pt_console_view;
unique_ptr<console_display::or_console_display> or_console_view;
void processing_OR(correlated_sample_set or_sample_set, or_video_module_impl* impl, or_data_interface* or_data,
or_configuration_interface* or_configuration)
{
rs::core::status st;
// Declare data structure and size for results
rs::object_recognition::localization_data* localization_data = nullptr;
//Run object localization processing
st = impl->process_sample_set(or_sample_set);
if (st != rs::core::status_no_error)
{
is_or_processing_frame = false;
return;
}
// Retrieve recognition data from the or_data object
int array_size = 0;
st = or_data->query_localization_result(&localization_data, array_size);
if (st != rs::core::status_no_error)
{
is_or_processing_frame = false;
return;
}
//Send OR data to ui
if (localization_data && array_size != 0)
{
or_console_view->on_object_localization_data(localization_data, array_size, or_configuration);
or_web_view->on_object_localization_data(localization_data, array_size, or_configuration);
}
is_or_processing_frame = false;
}
int main(int argc,char* argv[])
{
rs::core::status st;
pt_utils pt_utils;
rs::core::image_info colorInfo,depthInfo;
rs::core::video_module_interface::actual_module_config actualModuleConfig;
rs::person_tracking::person_tracking_video_module_interface* ptModule = nullptr;
rs::object_recognition::or_video_module_impl impl;
rs::object_recognition::or_data_interface* or_data = nullptr;
rs::object_recognition::or_configuration_interface* or_configuration = nullptr;
cout << endl << "Initializing Camera, Object Recognition and Person Tracking modules" << endl;
if(pt_utils.init_camera(colorInfo,depthInfo,actualModuleConfig,impl,&or_data,&or_configuration) != rs::core::status_no_error)
{
cerr << "Error: Device is null." << endl << "Please connect a RealSense device and restart the application" << endl;
return -1;
}
pt_utils.init_person_tracking(&ptModule);
//Person Tracking Configuration. Set tracking mode to 0
ptModule->QueryConfiguration()->QueryTracking()->Enable();
ptModule->QueryConfiguration()->QueryTracking()->SetTrackingMode((Intel::RealSense::PersonTracking::PersonTrackingConfiguration::TrackingConfiguration::TrackingMode)0);
if(ptModule->set_module_config(actualModuleConfig) != rs::core::status_no_error)
{
cerr<<"error : failed to set the enabled module configuration" << endl;
return -1;
}
//Object Recognition Configuration
//Set mode to localization
or_configuration->set_recognition_mode(rs::object_recognition::recognition_mode::LOCALIZATION);
//Set the localization mechanism to use CNN
or_configuration->set_localization_mechanism(rs::object_recognition::localization_mechanism::CNN);
//Ignore all objects with probability (confidence) under 0.7
or_configuration->set_recognition_confidence(0.7);
//Enabling object center feature
or_configuration->enable_object_center_estimation(true);
st = or_configuration->apply_changes();
if (st != rs::core::status_no_error)
return st;
//Launch GUI
string sample_name = argv[0];
// Create console view
pt_console_view = move(console_display::make_console_pt_display());
or_console_view = move(console_display::make_console_or_display());
// Create and start remote(Web) view
or_web_view = move(web_display::make_or_web_display(sample_name, 8000, true));
pt_web_view = move(web_display::make_pt_web_display(sample_name, 8000, true));
cout << endl << "-------- Press Esc key to exit --------" << endl << endl;
while (!pt_utils.user_request_exit())
{
//Get next frame
rs::core::correlated_sample_set* sample_set = pt_utils.get_sample_set(colorInfo,depthInfo);
rs::core::correlated_sample_set* sample_set_pt = pt_utils.get_sample_set(colorInfo,depthInfo);
//Increment reference count of images at sample set
for (int i = 0; i < static_cast<uint8_t>(rs::core::stream_type::max); ++i)
{
if (sample_set_pt->images[i] != nullptr)
{
sample_set_pt->images[i]->add_ref();
}
}
//Draw Color frames
auto colorImage = (*sample_set)[rs::core::stream_type::color];
pt_web_view->on_rgb_frame(10, colorImage->query_info().width, colorImage->query_info().height, colorImage->query_data());
//Run OR in a separate thread. Update GUI with the result
if (!is_or_processing_frame) // If we aren't already processing or for a frame:
{
is_or_processing_frame = true;
std::thread recognition_thread(processing_OR, *sample_set,
&impl, or_data, or_configuration);
recognition_thread.detach();
}
//Run Person Tracking
if (ptModule->process_sample_set(*sample_set_pt) != rs::core::status_no_error)
{
cerr << "error : failed to process sample" << endl;
continue;
}
//Update GUI with PT result
pt_console_view->on_person_info_update(ptModule);
pt_web_view->on_PT_tracking_update(ptModule);
}
pt_utils.stop_camera();
actualModuleConfig.projection->release();
return 0;
}
After installing the RealSense SDK, check the realsense_playback_device_sample for how to load an RSSDK capture file.
The short answer is: not really. Besides the images captured by the other camera, you would also need to supply the camera intrinsic and extrinsic settings in order to calculate the depth of an object and call the person tracking module.
I am trying to interact with the internal EEPROM of the PIC24F16KA101 MCU, after reading the datasheet and a discussion on this site (which offered some pretty helpful sample code that I used in the project).
Now, if I run the code below, the program works just fine, since I am able to read back the same value that I wrote previously. However, if after writing I power off the MCU and then perform only a read of the EEPROM, it does not return the value that was written. What could be the problem here? Why can I write and then read successfully, but not read after a power-off?
Thanks in advance to all for the help
Damian
int __attribute__ ((space(eedata))) ee_addr;
void EepSetup();
void EepErase(void);
int EepWrite(int index, int data);
int EepRead(int index);
int main(int argc, char** argv)
{
unsigned int data = 123;
unsigned int data_read = 0;
Init_UART1();
UART1WriteString("START EEPROM PROGRAM \n");
EepSetup();
UART1WriteString("WRITING DATA TO MEMORY \n");
EepWrite(1,data);
//if the code works, just comment the upper section and read eeprom after
//disconecting the power source
UART1WriteString("READING DATA FROM MEMORY \n");
data_read = EepRead(1);
UART1WriteString("Value Read: ");
UART1WriteInt(data_read,16);
UART1WriteString("\n");
__delay_ms(1000);
return (EXIT_SUCCESS);
}
void EepSetup(){
//Disable Interrupts For 5 instructions
asm volatile("disi #5");
//Issue Unlock Sequence
asm volatile("mov #0x55, W0 \n"
"mov W0, NVMKEY \n"
"mov #0xAA, W1 \n"
"mov W1, NVMKEY \n");
}
void EepErase(void) {
NVMCON = 0x4050; // Set up NVMCON to bulk erase the data EEPROM
asm volatile ("disi #5"); // Disable Interrupts For 5 Instructions
__builtin_write_NVM(); // Issue Unlock Sequence and Start Erase Cycle
while(_WR)
;
}
int EepRead(int index){
unsigned int offset;
TBLPAG = __builtin_tblpage(&ee_addr); // Initialize EE Data page pointer
offset = __builtin_tbloffset(&ee_addr); // Initialize lower word of address
offset += index * sizeof(int);
return __builtin_tblrdl(offset); // read EEPROM data
}
int EepWrite(int index, int data){
unsigned int offset;
NVMCON = 0x4004; // Set up NVMCON to write one word of data EEPROM
TBLPAG = __builtin_tblpage(&ee_addr); // Initialize EE Data page pointer
offset = __builtin_tbloffset(&ee_addr); // Initialize lower word of address
offset += index * sizeof(int);
__builtin_tblwtl(offset, data); // Load the data word into the write latch
asm volatile ("disi #5"); // Disable Interrupts For 5 Instructions
__builtin_write_NVM(); // Issue Unlock Sequence and Start Write Cycle
while(_WR);
return (EXIT_SUCCESS);
}
I just figured out what the problem was: if you use the PICkit 3 with MPLAB X, you have to check an option in the programmer settings to preserve the EEPROM memory. So the code was functional; you just need to enable the "Preserve EEPROM Memory" option in the programmer settings. I hope this helps others.
Cheers, Damian
I am working with an Arduino and Processing with the Arduino library.
I get the error "The function bitWrite(byte, int, int) does not exist."
It seems that Processing + the Arduino bitWrite function are not working together.
It's raised by this line:
arduino.bitWrite(data,desiredPin,desiredState);
My goal in this project is to modify a music-reactive sketch to work with shift registers.
Here is my full code:
Arduino_Shift_display
import ddf.minim.*;
import ddf.minim.analysis.*;
import processing.serial.*;
import cc.arduino.*;
int displayNum = 8;
Arduino arduino;
//Set these in the order of frequency - 0th pin is the lowest frequency,
//while the final pin is the highest frequency
int[] lastFired = new int[displayNum];
int datapin = 2;
int clockpin = 3;
int latchpin = 4;
int switchpin = 7;
byte data = 0;
//Change these to mess with the flashing rates
//Sensitivity is the shortest possible interval between beats
//minTimeOn is the minimum time an LED can be on
int sensitivity = 75;
int minTimeOn = 50;
String mode;
String source;
Minim minim;
AudioInput in;
AudioPlayer song;
BeatDetect beat;
//Used to stop flashing if the only signal on the line is random noise
boolean hasInput = false;
float tol = 0.005;
void setup(){
// shift register setup
arduino.pinMode(datapin, arduino.OUTPUT);
arduino.pinMode(clockpin, arduino.OUTPUT);
arduino.pinMode(latchpin, arduino.OUTPUT);
arduino.digitalWrite(switchpin, arduino.HIGH);
//Uncomment the mode/source pair for the desired input
//Shoutcast radio stream
//mode = "radio";
//source = "http://scfire-ntc-aa05.stream.aol.com:80/stream/1018";
//mode = "file";
//source = "/path/to/mp3";
mode = "mic";
source = "";
size(512, 200, P2D);
minim = new Minim(this);
arduino = new Arduino(this, Arduino.list()[1]);
minim = new Minim(this);
if (mode == "file" || mode == "radio"){
song = minim.loadFile(source, 2048);
song.play();
beat = new BeatDetect(song.bufferSize(), song.sampleRate());
beat.setSensitivity(sensitivity);
} else if (mode == "mic"){
in = minim.getLineIn(Minim.STEREO, 2048);
beat = new BeatDetect(in.bufferSize(), in.sampleRate());
beat.setSensitivity(sensitivity);
}
}
void shiftWrite(int desiredPin, int desiredState)
// This function lets you make the shift register outputs
// HIGH or LOW in exactly the same way that you use digitalWrite().
// Like digitalWrite(), this function takes two parameters:
// "desiredPin" is the shift register output pin
// you want to affect (0-7)
// "desiredState" is whether you want that output
// to be HIGH or LOW
// Inside the Arduino, numbers are stored as arrays of "bits",
// each of which is a single 1 or 0 value. Because a "byte" type
// is also eight bits, we'll use a byte (which we named "data"
// at the top of this sketch) to send data to the shift register.
// If a bit in the byte is "1", the output will be HIGH. If the bit
// is "0", the output will be LOW.
// To turn the individual bits in "data" on and off, we'll use
// a new Arduino commands called bitWrite(), which can make
// individual bits in a number 1 or 0.
{
// First we'll alter the global variable "data", changing the
// desired bit to 1 or 0:
arduino.bitWrite(data,desiredPin,desiredState);
// Now we'll actually send that data to the shift register.
// The shiftOut() function does all the hard work of
// manipulating the data and clock pins to move the data
// into the shift register:
arduino.shiftOut(datapin, clockpin, MSBFIRST, data);
// Once the data is in the shift register, we still need to
// make it appear at the outputs. We'll toggle the state of
// the latchPin, which will signal the shift register to "latch"
// the data to the outputs. (Latch activates on the high-to
// -low transition).
arduino.digitalWrite(latchpin, arduino.HIGH);
arduino.digitalWrite(latchpin, arduino.LOW);
}
void draw(){
if (mode == "file" || mode == "radio"){
beat.detect(song.mix);
drawWaveForm((AudioSource)song);
} else if (mode == "mic"){
beat.detect(in.mix);
drawWaveForm((AudioSource)in);
}
if (hasInput){ //hasInput is set within drawWaveForm
for (int i=0; i<displayNum-1; i++){
if ( beat.isRange( i+1, i+1, 1) ){
shiftWrite(i, 1);
lastFired[i] = millis();
} else {
if ((millis() - lastFired[i]) > minTimeOn){
shiftWrite(i, 0);
}
}
}
}
} //End draw method
//Display the input waveform
//This method sets 'hasInput' - if any sample in the signal has a value
//larger than 'tol,' there is a signal and the lights should flash.
//Otherwise, only noise is present and the lights should stay off.
void drawWaveForm(AudioSource src){
background(0);
stroke(255);
hasInput = false;
for(int i = 0; i < src.bufferSize() - 1; i++)
{
line(i, 50 + src.left.get(i)*50, i+1, 50 + src.left.get(i+1)*50);
line(i, 150 + src.right.get(i)*50, i+1, 150 + src.right.get(i+1)*50);
if (!hasInput && (abs(src.left.get(i)) > tol || abs(src.right.get(i)) > tol)){
hasInput = true;
}
}
}
void resetPins(){
for (int i=0; i<ledPins.length; i++){
arduino.digitalWrite(ledPins[i], Arduino.LOW);
}
}
void stop(){
resetPins();
if (mode == "mic"){
in.close();
}
minim.stop();
super.stop();
}
BeatListener
class BeatListener implements AudioListener
{
private BeatDetect beat;
private AudioPlayer source;
BeatListener(BeatDetect beat, AudioPlayer source)
{
this.source = source;
this.source.addListener(this);
this.beat = beat;
}
void samples(float[] samps)
{
beat.detect(source.mix);
}
void samples(float[] sampsL, float[] sampsR)
{
beat.detect(source.mix);
}
}
You can achieve the same thing using standard bitwise operators. To turn a bit on:
data |= 1 << bitNumber;
The right-hand side (1 << bitNumber) is a bit-shift operation to create a suitable bit-mask. It takes the single '1' bit and moves it left until it reaches the desired position. The bitwise-or assignment (|=) combines that new bit-mask with the existing bits in data. This turns the desired bit on, but leaves the rest untouched.
The code to turn a bit off is slightly different:
data &= ~(1 << bitNumber);
You can see the same bit-shift operation here. However, it's preceded by the unary negation operator (~). This swaps all the 1's for 0's, and all the 0's for 1's. The result is the exact opposite of the bit-mask we used before. You can't do a bitwise-or operation this time though, or else you'll turn all the other bits on. The bitwise-and assignment (&=) is used instead to combine this mask with the data variable. This ensures the desired bit is turned off, and the rest are untouched.
In your code, desiredPin is the equivalent of bitNumber.
A full explanation of how bitwise operations work can be quite lengthy. I'd recommend looking for a good tutorial online if you need more help with that.
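Putting the two operations together, a helper with the same semantics as bitWrite() could look like this (a plain C sketch; in your Processing sketch you would assign the result back to data, casting to byte):
/* Returns 'value' with bit 'bitNumber' set when state is non-zero, cleared otherwise */
unsigned char bit_write(unsigned char value, int bitNumber, int state)
{
if (state)
return value | (unsigned char)(1u << bitNumber); /* turn the bit on */
return value & (unsigned char)~(1u << bitNumber); /* turn the bit off */
}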
There are also the bitSet and bitClear Arduino macros, which make the code a little more readable than bit shifting with AND and OR. The format is either bitSet(what_to_modify, bit_number) or bitClear(what_to_modify, bit_number). These translate into very efficient code and can be used to manipulate both variables and hardware registers. So, for example, if you wanted to turn on pin 13 on the Arduino Uno, you would first need to look up that Arduino pin 13 is actually pin 5 on PORTB of the ATmega328 chip. The command would then be:
bitSet(PORTB,5);
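For reference, the Arduino core defines these macros essentially as thin wrappers around the same bitwise operations shown above:
#define bitSet(value, bit) ((value) |= (1UL << (bit)))
#define bitClear(value, bit) ((value) &= ~(1UL << (bit)))
#define bitWrite(value, bit, bitvalue) ((bitvalue) ? bitSet(value, bit) : bitClear(value, bit))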