nacl_io bind fails with EPERM - google-nativeclient

I wrote a demo app that uses nacl_io sockets, but bind fails with errno == EPERM.
Building with pepper_37.
Google Chrome 39.0.2171.95 (m)
OS: Windows 7 / Server 2008 R2 SP1, 64-bit
PNaCl translator version 0.1.0.13769
Chrome flags:
--allow-nacl-socket-api=localhost --no-sandbox --enable-nacl
class ProxyTesterInstance : public pp::Instance
{
public:
explicit ProxyTesterInstance(PP_Instance instance, PPB_GetInterface get_interface) : pp::Instance(instance)
{
nacl_io_init_ppapi(instance, get_interface);
}
virtual ~ProxyTesterInstance() {}
virtual void HandleMessage(const pp::Var& var_message)
{
if (!var_message.is_string())
return;
std::string message = var_message.AsString();
if (message == kStartString)
{
reply(kReplyStartString);
int fd = socket( PF_INET, SOCK_STREAM, 0);
struct sockaddr_in myaddr;
myaddr.sin_family = PF_INET;
myaddr.sin_port = htons(50000);
inet_aton("0.0.0.0", &myaddr.sin_addr );
int res = bind(fd, (struct sockaddr*)&myaddr, sizeof(myaddr)); //returns -1
myaddr.sin_port = htons(80);
inet_aton("173.194.113.2", &myaddr.sin_addr );
res = connect(fd, (struct sockaddr*)&myaddr, sizeof(myaddr)); //returns 0
}
}
};

nacl_io assumes that it is being run on a worker thread, not the main thread. This is because many socket functions are blocking, and it is illegal to block the main thread in a NaCl application. Unfortunately, the error messages do not explain this constraint very clearly.
The easiest way to make this code work is to use the ppapi_simple library. It will initialize nacl_io for you and start running your code on a worker thread. At this point, you'll be able to make blocking calls (such as bind). It also gives you a main-like entry point instead of having to create a pp::Instance.
Take a look at some of the demos in the NaCl SDK (e.g. examples/demo/earth, examples/demo/pi_generator) for how to use ppapi_simple.
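For reference, here is a minimal sketch of what the ppapi_simple version might look like; the entry-point name is arbitrary, and this is only a rough outline rather than a tested drop-in:
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include "ppapi_simple/ps_main.h"

// This entry point runs on a worker thread, so blocking nacl_io calls
// such as bind() and connect() are allowed here.
int example_main(int argc, char* argv[]) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  struct sockaddr_in myaddr;
  memset(&myaddr, 0, sizeof(myaddr));
  myaddr.sin_family = AF_INET;
  myaddr.sin_port = htons(50000);
  inet_aton("0.0.0.0", &myaddr.sin_addr);
  int res = bind(fd, (struct sockaddr*)&myaddr, sizeof(myaddr));
  return res == 0 ? 0 : 1;
}

// ppapi_simple creates the pp::Instance, initializes nacl_io and
// spawns the worker thread that calls example_main.
PPAPI_SIMPLE_REGISTER_MAIN(example_main)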

Related

Getting positions from gpsd in a Qt quick program

I have a computer with a GPS connected to a serial port that is running gpsd with a pretty basic configuration. Here are the contents of /etc/default/gpsd:
START_DAEMON="true"
USBAUTO="false"
DEVICES="/dev/ttyS0"
GPSD_OPTIONS="-n -G"
GPSD_SOCKET="/var/run/gpsd.sock"
With this config, gpsd runs fine and all gpsd client utilities, e.g. cgps, gpspipe, gpsmon, can get data from the GPS.
I am trying to access GPS data from a Qt QML program using the PositionSource element with the following syntax, but lat and long show as NaN, so it doesn't work:
PositionSource {
id: gpsPos
updateInterval: 500
active: true
nmeaSource: "socket://localhost:2947"
onPositionChanged: {
myMap.update( gpsPos.position )
}
}
I tried piping the NMEA data from the GPS to another port using gpspipe -r | nc -l 6000 and specifying nmeaSource: "socket://localhost:6000", and everything works fine!
How do I make Qt talk to gpsd directly?
After tinkering (i.e. compiling from source, installing, configuring, testing, etc.) with gps-share, Gypsy, geoclue2, serialnmea and other ways to access data from a GPS connected to a serial port (thanks to Pa_ for all the suggestions), all with no results while gpsd was working perfectly for other apps, I decided to make Qt support gpsd by making a very crude change to the QDeclarativePositionSource class: it adds support for a gpsd scheme in the URL of the nmeaSource property. With this change, a gpsd source can now be defined as nmeaSource: "gpsd://hostname:2947" (2947 is the standard gpsd port).
The changed code is shown below. I would suggest this should be added to Qt at some point, but in the meantime I guess I need to derive this class to implement my change in a new QML component; being new to QML, I have no idea how that is done. It would also probably be a good idea to stop and start the NMEA stream from gpsd based on the active property of the PositionSource item (a rough sketch of that idea follows the code). I will get to it at some point but would appreciate pointers on how to do this in a more elegant way.
void QDeclarativePositionSource::setNmeaSource(const QUrl &nmeaSource)
{
if ((nmeaSource.scheme() == QLatin1String("socket") )
|| (nmeaSource.scheme() == QLatin1String("gpsd"))) {
if (m_nmeaSocket
&& nmeaSource.host() == m_nmeaSocket->peerName()
&& nmeaSource.port() == m_nmeaSocket->peerPort()) {
return;
}
delete m_nmeaSocket;
m_nmeaSocket = new QTcpSocket();
connect(m_nmeaSocket, static_cast<void (QTcpSocket::*)(QAbstractSocket::SocketError)> (&QAbstractSocket::error),
this, &QDeclarativePositionSource::socketError);
connect(m_nmeaSocket, &QTcpSocket::connected,
this, &QDeclarativePositionSource::socketConnected);
// If scheme is gpsd, NMEA stream must be initiated by writing a command
// on the socket (gpsd WATCH_ENABLE | WATCH_NMEA flags)
// (ref.: gps_sock_stream function in gpsd source file libgps_sock.c)
if( nmeaSource.scheme() == QLatin1String("gpsd")) {
m_nmeaSocket->connectToHost(nmeaSource.host(),
nmeaSource.port(),
QTcpSocket::ReadWrite);
char const *gpsdInit = "?WATCH={\"enable\":true,\"nmea\":true}";
m_nmeaSocket->write( gpsdInit, strlen(gpsdInit) );
} else {
m_nmeaSocket->connectToHost(nmeaSource.host(), nmeaSource.port(), QTcpSocket::ReadOnly);
}
} else {
...
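On the follow-up point about tying the stream to the active property, one rough possibility is to send gpsd the matching WATCH command when the source is toggled. This is purely a sketch: the setActive hook and how it interacts with the base implementation are assumptions, not part of the patch above.
// Hypothetical extension: enable/disable gpsd's NMEA stream when the
// QML 'active' property changes.
void QDeclarativePositionSource::setActive(bool active)
{
    if (m_nmeaSocket && m_nmeaSocket->state() == QAbstractSocket::ConnectedState) {
        const char *cmd = active ? "?WATCH={\"enable\":true,\"nmea\":true}"
                                 : "?WATCH={\"enable\":false}";
        m_nmeaSocket->write(cmd, strlen(cmd));
    }
    // ... then continue with the existing activation logic ...
}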

How to use another event loop in win32 gui application

I am new to Win32 API programming, and I am trying to write an XMPP client for the Windows platform using the Win32 API and the gloox XMPP library. gloox has its own event loop, while a Windows GUI has its own message loop. I am not very clear on how to use these two loops together.
From the gloox document:
Blocking vs. Non-blocking Connections
For some kind of bots a blocking connection (the default behaviour) is ideal. All the bot does is react to events coming from the server. However, for end user clients or anything with a GUI this is far from perfect.
In these cases non-blocking connections can be used. If ClientBase::connect( false ) is called, the function returns immediately after the connection has been established. It is then the responsibility of the programmer to initiate receiving of data from the socket.
The easiest way is to call ClientBase::recv() periodically with the desired timeout (in microseconds) as parameter. The default value of -1 means the call blocks until any data was received, which is then parsed automatically.
Window message loop:
while (GetMessage(&msg, NULL, 0, 0))
{
TranslateMessage(&msg);
DispatchMessage(&msg);
}
return msg.wParam;
Window proc:
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
TCHAR str[100];
StringCbPrintf(str, _countof(str), TEXT("Message ID:%-6x:%s"), msg, GetStringMessage(msg));
OutputDebugString(str);
HDC hdc;
PAINTSTRUCT ps;
RECT rect;
switch (msg)
{
case WM_CREATE:
return 0;
case WM_PAINT:
hdc = BeginPaint(hWnd, &ps);
GetClientRect(hWnd, &rect);
DrawText(hdc, TEXT("DRAW TEXT ON CLIENT AREA"), -1, &rect, DT_CENTER | DT_SINGLELINE | DT_VCENTER);
EndPaint(hWnd, &ps);
return 0;
case WM_DESTROY:
PostQuitMessage(0);
return 0;
default:
break;
}
return DefWindowProc(hWnd, msg, wParam, lParam);
}
gloox blocking connection
JID jid( "jid#server/resource" );
Client* client = new Client( jid, "password" );
client->registerConnectionListener( this );
client->registerPresenceHandler( this );
client->connect();// here will enter event loop
gloox non-blocking connection
Client* client = new Client( ... );
ConnectionTCPClient* conn = new ConnectionTCPClient( client, client->logInstance(), server, port );
client->setConnectionImpl( conn );
client->connect( false );
int sock = conn->socket();
[...]
I am not very clear on how I can
call ClientBase::recv() periodically with the desired timeout (in microseconds) as parameter
With a timer? With multithreaded programming? Or is there a better solution?
Any suggestions are appreciated.
Thank you
The best I/O strategy for that is overlapped I/O. Unfortunately, that method is Windows-only and not supported by the cross-platform library you've picked.
You can use the SetTimer() API and periodically call the library's recv() method with a zero timeout in a WM_TIMER handler. This introduces extra latency (your PC receives a message but has to wait for the next timer event to handle it), or, if you use small intervals like 20 ms, it will drain the battery on laptops and tablets.
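A rough sketch of that timer-based approach; g_client, IDT_XMPP_POLL and the 50 ms interval are placeholders for this example, not names from your code:
static const UINT_PTR IDT_XMPP_POLL = 1;
static gloox::Client* g_client = 0;   // set after client->connect( false ) succeeds

LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        SetTimer(hWnd, IDT_XMPP_POLL, 50, NULL);   // WM_TIMER roughly every 50 ms
        return 0;
    case WM_TIMER:
        if (wParam == IDT_XMPP_POLL && g_client)
            g_client->recv(0);   // zero timeout: process pending data and return at once
        return 0;
    case WM_DESTROY:
        KillTimer(hWnd, IDT_XMPP_POLL);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}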
You can use the blocking API on a separate thread. This is more efficient performance-wise but harder to implement: you'll have to marshal messages and other events to the GUI thread. Custom WM_USER+n window messages are usually the best way to do that, by the way.
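And a sketch of the thread-based variant; WM_XMPP_EVENT, g_hMainWnd and the payload are placeholders, and remember that the gloox handlers themselves fire on the worker thread:
static const UINT WM_XMPP_EVENT = WM_USER + 1;

// Worker thread: run the blocking gloox loop here, off the GUI thread.
DWORD WINAPI XmppThreadProc(LPVOID param)
{
    gloox::Client* client = static_cast<gloox::Client*>(param);
    client->connect();   // blocking; connection/message handlers run on this thread
    return 0;
}

// Start it once the main window exists:
//   CreateThread(NULL, 0, XmppThreadProc, client, 0, NULL);
//
// Inside a gloox handler (still on the worker thread), hand data to the GUI thread:
//   PostMessage(g_hMainWnd, WM_XMPP_EVENT, 0, reinterpret_cast<LPARAM>(payload));
//
// And in WndProc, on the GUI thread:
//   case WM_XMPP_EVENT:
//       // read the payload, update the UI, then free the payload
//       return 0;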

gnu.io.PortInUseException: Unknown Application?

void connect ( String portName ) throws Exception
{
CommPortIdentifier portIdentifier = CommPortIdentifier.getPortIdentifier(portName);
if ( portIdentifier.isCurrentlyOwned() )
{
System.out.println("Error: Port is currently in use");
}
else
{
System.out.println(portIdentifier.getCurrentOwner());
CommPort commPort = portIdentifier.open(this.getClass().getName(),2000);
if ( commPort instanceof SerialPort )
{
SerialPort serialPort = (SerialPort) commPort;
serialPort.setSerialPortParams(115200,SerialPort.DATABITS_8,SerialPort.STOPBITS_1,SerialPort.PARITY_NONE);
InputStream in = serialPort.getInputStream();
OutputStream out = serialPort.getOutputStream();
(new Thread(new SerialReader(in))).start();
(new Thread(new SerialWriter(out))).start();
}
else
{
System.out.println("Error: Only serial ports are handled by this example.");
}
}
}
is giving
gnu.io.PortInUseException: Unknown Application
at gnu.io.CommPortIdentifier.open(CommPortIdentifier.java:354)
I am using RXTX with Java on Windows 7 Home 64-bit.
Check that the /var/lock folder exists on your machine:
mkdir /var/lock
chmod go+rwx /var/lock
Reboot the system or disable the port.
The actual problem is that when the program runs the port is opened, and it is not closed after the program terminates. After that, it works.
I ran into this problem because the port was actually in use: a previous instance of javaw.exe, visible in the Windows Task Manager, was hogging the port.
The reason that previous Java process hung was a hardware issue: when the USB-to-serial converter I happened to use was plugged into a USB 2 port, all worked fine. When it was plugged into a USB 3 port, the RXTX CommPortIdentifier code would hang, and subsequent instances of Java then received the PortInUseException.
I used Process Explorer to find a process with the handle \Device\PCISerial0 and closed the handle. If your com ports aren't on a PCI card, the name might be different.
For Windows
Open Task Manager
Find the Java application running under Eclipse (or your IDE).
Right-click on it -> End Task.
This may also be useful: I solved this problem by removing the gateway from the service and stopping it (the gateway is an instance of SerialModemGateway):
Service.getInstance().stopService();
Service.getInstance().removeGateway(gateway);
gateway.stopGateway();

RInside: parseEvalQ 'Parse Error' causes each subsequent call to parseEvalQ to give a 'Parse Error' even if exception handled

My code, which tries to emulate an R shell via C++, allows a user to send R commands over a TCP connection; these are then passed at runtime to the R instance through the RInside::parseEvalQ function. I have to be able to handle badly formatted commands. Whenever a bad command is given as an argument to parseEvalQ, I catch the runtime error thrown (looking at RInside.cpp, my specific error is flagged with the PARSE_ERROR status within the parseEval(const string&, SEXP) function); what() gives a "St9exception" exception.
I have two problems, the first more pressing than the second:
1a. After an initial parse error, any subsequent call to parseEvalQ results in another parse error, even if the argument is valid. Is the embedded R instance being corrupted in some way by the parse error?
1b. The RInside documentation recommends using Rcpp::Evaluator::run to handle R exceptions in C++ (which I suspect are being thrown somewhere within the R instance during the call to parseEval(const string&, SEXP), before it returns the PARSE_ERROR status). I have experimented with this but can find no examples on the web of how to use Rcpp::Evaluator::run in practice.
2. In my program I re-route stdout and stderr (at the C++ level) to the file descriptor of my TCP connection; any error messages from the RInside instance get sent to the console, but regular output does not. I send RInside the command 'sink(stderr(), type="output")' in an effort to re-route stdout to stderr (as stderr appears to be showing up in my console), but regular output is still not shown. 'print(command)' works, but I'd like a cleaner way of passing stdout straight to the console as in a normal R shell.
Any help and/or thoughts would be much appreciated. A distilled version of my code is shown below:
using namespace std;
string request_cpp;
ostringstream oss;
int read(FILE* tcp_fd)
{
/* function to read input from FILE* into the 'request_cpp' string */
}
int write(FILE* tcp_fd, const string& response)
{
/* function to write a string to FILE* */
}
int main(int argc, char* argv[])
{
// create RInside object
RInside R(argc,argv);
//socket
int sd = socket(PF_INET, SOCK_STREAM, 0);
struct sockaddr_in addr;
addr.sin_family = AF_INET;
addr.sin_port = htons(40650);
// set and accept connection on socket
inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
bind(sd,(struct sockaddr*)&addr, sizeof(addr));
listen(sd,1);
int sd_i = accept(sd, 0, 0);
//re-route stdout and stderr to socket
close(1);
dup(sd_i);
close(2);
dup(sd_i);
// open read/write file descriptor to socket
FILE* fp = fdopen(sd_i,"r+");
// emulate R prompt
write(fp,"> ");
// (attempt to) redirect R's stdout to stderr
R.parseEvalQ("sink(stderr(),type=\"output\");");
// read from socket and pass commands to RInside
while( read(fp) )
{
try
{
// skip empty input
if(request_cpp == "")
{
write(fp, "> ");
continue;
}
else if(request_cpp == "q()")
{
break;
}
else
{
// clear string stream
oss.str("");
// wrap command in try
oss << "try(" << request_cpp << ");" << endl;
// send command
R.parseEvalQ(oss.str());
}
}
catch(const exception& e)
{
// print exception to console
write(fp, e.what());
}
write(fp, "> ");
}
fclose(fp);
close(sd_i);
exit(0);
}
I missed this weeks ago as you didn't use the 'r' tag.
It seems like you are re-implementing Simon's trusted Rserve. Why not use that directly?
Otherwise, for the Rcpp question, consider asking on our rcpp-devel list.

Passing a QLocalSocket* to a method expecting a QIODevice*

Perhaps I'm being over-ambitious, but I'm trying to write a server program which can accept connections over both QLocalSockets and QTcpSockets. The concept is to have a 'nexus' object with both a QLocalServer and QTcpServer listening for new connections:
Nexus::Nexus(QObject *parent)
: QObject(parent)
{
// Establish a QLocalServer to deal with local connection requests:
localServer = new QLocalServer;
connect(localServer, SIGNAL(newConnection()),
this, SLOT(newLocalConnection()));
localServer -> listen("CalculationServer");
// Establish a UDP socket to deal with discovery requests:
udpServer = new QUdpSocket(this);
udpServer -> bind(QHostAddress::Any, SERVER_DISCOVERY_PORT);
connect(udpServer, SIGNAL(readyRead()),
this, SLOT(beDiscovered()));
// Establish a QTcpServer to deal with remote connection requests:
tcpServer = new QTcpServer;
connect(tcpServer, SIGNAL(newConnection()),
this, SLOT(newTcpConnection()));
tcpServer -> listen(QHostAddress::Any, SERVER_COMMAND_PORT);
}
... and then separate slots which establish a server object, whose constructor takes a pointer to a QIODevice. In theory, this ought to work because both QLocalSocket and QTcpSocket inherit QIODevice. Here is the newLocalConnection slot, for example:
void Nexus::newLocalConnection()
{
// Create a new CalculationServer connected to the newly-created local socket:
serverList.append(new CalculationServer(localServer -> nextPendingConnection()));
// We don't allow more than one local connection, so stop listening on the server:
localServer -> close();
}
The problem is that this won't compile, giving an error:
error C2664: 'CalculationServer::CalculationServer(QIODevice *,QObject *)' :
cannot convert parameter 1 from 'QLocalSocket *' to 'QIODevice *'
Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
Now the types pointed to are clearly not unrelated, and elsewhere in my code I have no problems at all with actions like:
QLocalSocket *socket = new QLocalSocket;
QIODevice *server = new QIODevice;
server = socket;
... so can anyone tell me why the compiler has a problem with this? Is there a way I can make the constructor accept the QLocalSocket*? I suppose there is the nuclear option of having the constructor take a void pointer plus an extra variable saying what it is being sent, so it can then cast the void pointer to either a QLocalSocket or a QTcpSocket, but I feel uncomfortable resorting to reinterpret_cast on what looks like it ought to be a straightforward bit of C++ polymorphism.
Regards,
Stephen.
The most likely reason is that you have forgotten to #include <QLocalSocket> in the source file where the error occurs.
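A minimal illustration of that fix, assuming the slot is implemented in a file such as nexus.cpp (the file and header names here are guesses):
// nexus.cpp
#include "nexus.h"
#include <QLocalSocket>   // without the full definition the compiler only sees a forward
                          // declaration and cannot know QLocalSocket derives from QIODevice

void Nexus::newLocalConnection()
{
    // QLocalSocket* now converts implicitly to the QIODevice* parameter:
    serverList.append(new CalculationServer(localServer->nextPendingConnection()));
    localServer->close();
}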
