Wrong SHA-1 hash with STM32 Crypto Library - encryption

With TrueStudio, I'm developing on the STM32F103RB with the STM32 Crypto Library package 'STM32CubeExpansion_Crypto_V3.1.0'. I would like to use the SHA-1 from the library, but for some reason I don't get the correct result.
Here is my test...
My input buffer is: "('1543409074.11', '1702635382a7b4243308035dfecc1e5e31678356bdfa39f92b6409a2')"
According to an online SHA-1 generator, the expected digest is: c6818ce06b79c91cda7cc89f1af243e3d1373c1f
With the STM32 Crypto Library, however, I can't seem to generate the correct SHA-1 sum. I call the SHA-1 hash function with the following code:
SHA1ctx_stt SHA1ctx_st;            // The SHA-1 context
membuf_stt mb_st;                  // Structure that will hold the preallocated buffer
uint8_t Digest[CRL_SHA1_SIZE];     // Buffer that will receive the SHA-1 digest of the message
uint8_t preallocated_buffer[4096]; // Buffer required for internal memory allocation
int32_t status = HASH_SUCCESS;
int32_t outputSize;
const char* Message = "('1543409074.11', '1702635382a7b4243308035dfecc1e5e31678356bdfa39f92b6409a2')";
int32_t MessageSize = strlen(Message);

// Initialize the membuf_stt that must be passed to the hash functions
mb_st.mSize = sizeof(preallocated_buffer);
mb_st.mUsed = 0;
mb_st.pmBuf = preallocated_buffer;

// Initialize the SHA-1 context
SHA1ctx_st.mFlags = E_HASH_DEFAULT;
// 20 bytes of output
SHA1ctx_st.mTagSize = CRL_SHA1_SIZE;

// Init SHA-1
status = SHA1_Init(&SHA1ctx_st);
if (status == HASH_SUCCESS)
{
    // Process the message with SHA-1
    status = SHA1_Append(&SHA1ctx_st, (const uint8_t *)Message, MessageSize);
    if (status == HASH_SUCCESS)
    {
        // Output the digest
        status = SHA1_Finish(&SHA1ctx_st, Digest, &outputSize);
        if (status == HASH_SUCCESS)
        {
            // Returns HASH_SUCCESS, but the result in Digest is not correct
        }
    }
}
What am I missing? Does anyone know what might be wrong?
Thanks,

I found the solution.
ST has historically locked these libraries to STM32 parts by probing the CRC peripheral, essentially a challenge-response test.
For the library to work, the CRC peripheral clock (RCC_CRC_CLK) must therefore be enabled. In CubeMX, activate the CRC peripheral under "Mode and Configuration" in the Computing tab.
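For reference, a minimal sketch of the required initialization, assuming the STM32F1 HAL (with the older standard peripheral library, RCC_AHBPeriphClockCmd(RCC_AHBPeriph_CRC, ENABLE) is the equivalent call); the helper name is hypothetical:

#include "stm32f1xx_hal.h"  // assumes the STM32F1 HAL for the STM32F103RB

// The crypto library probes the CRC peripheral, so its clock must be
// running before the first hash call, otherwise the digest comes out wrong.
static void crypto_hw_init(void)
{
    __HAL_RCC_CRC_CLK_ENABLE();
}

Call it once at startup, before SHA1_Init.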

Related

Reading Armv8-A registers with devmem from GNU/Linux shell

I want to read the values of some Cortex-A53 registers, such as
ID_AA64ISAR0_EL1 (AArch64)
ID_ISAR5 (AArch32)
ID_ISAR5_EL1 (AArch64)
Unfortunately, I have little embedded/assembly experience. The documentation reveals:
To access the ID_AA64ISAR0_EL1:
MRS <Xt>, ID_AA64ISAR0_EL1 ; Read ID_AA64ISAR0_EL1 into Xt
ID_AA64ISAR0_EL1[31:0] can be accessed through the internal memory-mapped interface
and the external debug interface, offset 0xD30.
I decided to utilize devmem2 on my target (since busybox does not include the devmem applet). Is the following procedure correct for reading the register?
devmem2 0xD30
The part I am unsure about is using the "offset" as a direct physical address. If it is the actual address, why call it "offset" and not "address"? If it is an offset, what is the base address? I am 99% certain this is not the correct procedure, but how do I find the base address to add the offset to? I have searched the Armv8 Technical Reference Manual and the A53 MPCore documents to no avail. They explain the register contents in detail but seem to assume you read them from assembly using the name ID_AA64ISAR0_EL1.
Update:
I found this:
Configuration Base Address Register, EL1
The CBAR_EL1 characteristics are:
Purpose Holds the physical base address of the memory-mapped GIC CPU
interface registers.
But it simply duplicates my problem: how do I read this other register?
Update 2:
The first update seems relevant only for the GIC and not for the configuration registers I am trying to read (I think I misunderstood the information).
For the specific problem at hand (checking crypto extension availability) one may simply cat /proc/cpuinfo and look for aes/sha etc.
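(A sketch of that user-space check: the same feature bits are exposed through the ELF auxiliary vector, so a small program can query them directly; the HWCAP_* names below assume an AArch64 Linux target.)

#include <stdio.h>
#include <sys/auxv.h>   // getauxval
#include <asm/hwcap.h>  // HWCAP_* bits on AArch64 Linux

int main(void)
{
    unsigned long hwcaps = getauxval(AT_HWCAP);

    // Each bit reflects a feature the kernel detected via the ID registers.
    printf("aes : %s\n", (hwcaps & HWCAP_AES)  ? "yes" : "no");
    printf("sha1: %s\n", (hwcaps & HWCAP_SHA1) ? "yes" : "no");
    printf("sha2: %s\n", (hwcaps & HWCAP_SHA2) ? "yes" : "no");
    return 0;
}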
Update 3:
I am now investigating http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0176c/ar01s04s01.html, as well as the possibility that the base address is SoC-specific and can therefore be found in the SoC's reference manual.
Update 4:
Thanks to the great answer I seem to be able to read data via my kernel module:
[ 4943.461948] ID_AA64ISAR0_EL1 : 0x11120
[ 4943.465775] ID_ISAR5_EL1 : 0x11121
P.S.: This was very insightful, thank you again!
Update 5:
Source code as per request:
/******************************************************************************
*
* Copyright (C) 2011 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See
* the GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*
*****************************************************************************/
#include <linux/module.h>
#include <linux/types.h>

/*****************************************************************************/

// Read system register ID_AA64ISAR0_EL1 (s3_0_c0_c6_0).
static inline uint64_t system_read_ID_AA64ISAR0_EL1(void)
{
    uint64_t val;
    asm volatile("mrs %0, ID_AA64ISAR0_EL1" : "=r" (val));
    return val;
}

// Read system register ID_ISAR5_EL1 (s3_0_c0_c2_5).
static inline uint64_t system_read_ID_ISAR5_EL1(void)
{
    uint64_t val;
    asm volatile("mrs %0, s3_0_c0_c2_5" : "=r" (val));
    return val;
}

/*****************************************************************************/

int init_module(void)
{
    printk("ramdump Hello World!\n");
    printk("ID_AA64ISAR0_EL1 : 0x%llX\n", system_read_ID_AA64ISAR0_EL1());
    printk("ID_ISAR5_EL1 : 0x%llX\n", system_read_ID_ISAR5_EL1());
    return 0;
}

void cleanup_module(void)
{
    printk("ramdump Goodbye Cruel World!\n");
}

MODULE_LICENSE("GPL");
Disclaimer: I am not an AArch64 expert, but I am currently learning about the architecture and have read a bit.
You cannot read ID_AA64ISAR0_EL1, ID_ISAR5_EL1, or ID_ISAR5 from a user-mode application running at EL0: the _EL1 suffix means that running at least at EL1 is required in order to be allowed to read those registers.
You may find it helpful to read the pseudo-code in the Arm documentation here and here.
In the case of ID_ISAR5 for example, the pseudo-code is very explicit:
if PSTATE.EL == EL0 then
    UNDEFINED;
elsif PSTATE.EL == EL1 then
    if EL2Enabled() && !ELUsingAArch32(EL2) && HSTR_EL2.T0 == '1' then
        AArch64.AArch32SystemAccessTrap(EL2, 0x03);
    elsif EL2Enabled() && ELUsingAArch32(EL2) && HSTR.T0 == '1' then
        AArch32.TakeHypTrapException(0x03);
    elsif EL2Enabled() && !ELUsingAArch32(EL2) && HCR_EL2.TID3 == '1' then
        AArch64.AArch32SystemAccessTrap(EL2, 0x03);
    elsif EL2Enabled() && ELUsingAArch32(EL2) && HCR.TID3 == '1' then
        AArch32.TakeHypTrapException(0x03);
    else
        return ID_ISAR5;
elsif PSTATE.EL == EL2 then
    return ID_ISAR5;
elsif PSTATE.EL == EL3 then
    return ID_ISAR5;
One easy way to read those registers would be to write a tiny loadable kernel module that you call from your user-mode application: since the Linux kernel runs at EL1, it is perfectly able to read all three registers.
See for example this article for a nice introduction to Linux loadable kernel modules.
And it is likely that an application running at EL0 cannot access memory-mapped registers accessible only from EL1, since that would obviously break the protection scheme.
The C code snippets required to read those registers in AArch64 state would be (tested with gcc-arm-9.2-2019.12-x86_64-aarch64-none-linux-gnu):
#include <stdint.h>

// Read system register ID_AA64ISAR0_EL1 (s3_0_c0_c6_0).
static inline uint64_t system_read_ID_AA64ISAR0_EL1(void)
{
    uint64_t val;
    asm volatile("mrs %0, s3_0_c0_c6_0" : "=r" (val));
    return val;
}

// Read system register ID_ISAR5_EL1 (s3_0_c0_c2_5).
static inline uint64_t system_read_ID_ISAR5_EL1(void)
{
    uint64_t val;
    asm volatile("mrs %0, s3_0_c0_c2_5" : "=r" (val));
    return val;
}
Update #1:
The GCC toolchain does not understand all Arm system register names, but it can nevertheless encode system register access instructions correctly if you specify the exact values of the coproc, opc1, CRn, CRm, and opc2 fields associated with the register.
In the case of ID_AA64ISAR0_EL1, the values specified in the Arm® Architecture Registers Armv8, for Armv8-A architecture profile document are:
coproc=0b11, opc1=0b000, CRn=0b0000, CRm=0b0110, opc2=0b000
The system register alias would then be s[coproc]_[opc1]_c[CRn]_c[CRm]_[opc2], that is s3_0_c0_c6_0 in the case of ID_AA64ISAR0_EL1.
This tool may meet your need:
system-register-tools
It provides read and write access to arm64 system registers, much like msr-tools on x86.

Using ZeroMQ ZMQ_STREAM to be a tcp client. Why am I receiving extra info?

I have an application that uses ZeroMQ for various things and I want to also use it as a tcp-client for other external connections.
Currently, if the external tcp-server sends data, the client receives a 5-byte identity frame, a zero-length frame, another 5-byte identity frame, and then the actual message.
How do I get ZeroMQ not to send this stuff?
#include <iostream>
#include <string>
#include <zmq.h>
#include <cstring>
#include <assert.h>
#include <chrono>
#include <thread>

int main()
{
    void *mpSocketContext = zmq_ctx_new();
    /* Create ZMQ_STREAM socket */
    void *mpSerialSocket = zmq_socket(mpSocketContext, ZMQ_STREAM);
    void *mpSocket = mpSerialSocket;
    bool aeBlocking = true;

    std::string asAddress = "127.0.0.1:1236";
    asAddress = "tcp://" + asAddress;
    std::cout << "tcSerialServerPort::tcSerialServerPort: connecting to " << asAddress << std::endl;

    int rc = zmq_connect(mpSerialSocket, asAddress.c_str());
    if (rc != 0)
        std::cout << "ZMQ ERROR: zmq_connect " << zmq_strerror(zmq_errno()) << std::endl;

    uint8_t id[256];
    size_t id_size = 256;
    rc = zmq_getsockopt(mpSerialSocket, ZMQ_IDENTITY, id, &id_size);
    assert(rc == 0);

    while (true)
    {
        zmq_msg_t msg;
        zmq_msg_init(&msg);
        size_t lnBytesReceived = 0;
        std::string lsStr;
        lnBytesReceived = zmq_recvmsg(mpSocket, &msg, aeBlocking ? 0 : ZMQ_DONTWAIT);
        lsStr = std::string(static_cast<const char*>(zmq_msg_data(&msg)),
                            zmq_msg_size(&msg));
        std::cout << "Received Bytes=" << lsStr.size() << " Data=" << lsStr << std::endl;
        zmq_msg_close(&msg);
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    zmq_close(mpSerialSocket);
    zmq_ctx_destroy(mpSocketContext);
    return 0;
}
Step #1: Don't panic.
It is very easy: either stop using ZeroMQ, or start designing things to be compatible with the published ZeroMQ API documentation. Seeking a third way is still possible, but one may easily guess where such a fork-project would end.
Best to start by re-reading the design rules from the API.
"Why am I receiving extra info?" The published ZeroMQ API says:
Native pattern
The native pattern is used for communicating with TCP peers and allows asynchronous requests and replies in either direction.
ZMQ_STREAM
A socket of type ZMQ_STREAM is used to send and receive TCP data from a non-ØMQ peer, when using the tcp:// transport. A ZMQ_STREAM socket can act as client and/or server, sending and/or receiving TCP data asynchronously.
When receiving TCP data, a ZMQ_STREAM socket shall prepend a message part containing the identity of the originating peer to the message before passing it to the application. Messages received are fair-queued from among all connected peers.
When sending TCP data, a ZMQ_STREAM socket shall remove the first part of the message and use it to determine the identity of the peer the message shall be routed to, and unroutable messages shall cause an EHOSTUNREACH or EAGAIN error.
To open a connection to a server, use the zmq_connect call, and then fetch the socket identity using the ZMQ_IDENTITY zmq_getsockopt call.
To close a specific connection, send the identity frame followed by a zero-length message (see EXAMPLE section).
When a connection is made, a zero-length message will be received by the application. Similarly, when the peer disconnects (or the connection is lost), a zero-length message will be received by the application.
You must send one identity frame followed by one data frame. The ZMQ_SNDMORE flag is required for identity frames but is ignored on data frames.
The rest is obvious: follow the API-documented behaviour in the user code, and all the ZeroMQ things work like a charm.
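As a concrete sketch of that receive contract (mpSocket is the socket from the question's code), each TCP event arrives as an identity frame followed by a data frame, and a zero-length data frame marks a connect/disconnect event rather than payload:

// Read one (identity, payload) frame pair from a ZMQ_STREAM socket.
zmq_msg_t id_frame, data_frame;

zmq_msg_init(&id_frame);
zmq_recvmsg(mpSocket, &id_frame, 0);    // frame 1: the originating peer's identity

zmq_msg_init(&data_frame);
zmq_recvmsg(mpSocket, &data_frame, 0);  // frame 2: the TCP payload

if (zmq_msg_size(&data_frame) == 0)
{
    // Zero-length payload: the peer connected or disconnected; no real data.
}
else
{
    std::string payload(static_cast<const char*>(zmq_msg_data(&data_frame)),
                        zmq_msg_size(&data_frame));
    // ... process the actual TCP bytes ...
}

zmq_msg_close(&id_frame);
zmq_msg_close(&data_frame);

Sending mirrors this: first the identity frame with ZMQ_SNDMORE, then the data frame.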

nacl_io bind fails with EPERM

I wrote a demo app that uses nacl_io sockets, but bind fails with errno == EPERM.
Building with pepper_37,
Google Chrome 39.0.2171.95 (m)
OS: Windows 7 or Server 2008 R2 SP1, 64-bit
PNaCl translator version 0.1.0.13769
chrome flags:
--allow-nacl-socket-api=localhost --no-sandbox --enable-nacl
class ProxyTesterInstance : public pp::Instance
{
public:
    explicit ProxyTesterInstance(PP_Instance instance, PPB_GetInterface get_interface)
        : pp::Instance(instance)
    {
        nacl_io_init_ppapi(instance, get_interface);
    }

    virtual ~ProxyTesterInstance() {}

    virtual void HandleMessage(const pp::Var& var_message)
    {
        if (!var_message.is_string())
            return;
        std::string message = var_message.AsString();
        if (message == kStartString)
        {
            reply(kReplyStartString);
            int fd = socket(PF_INET, SOCK_STREAM, 0);

            struct sockaddr_in myaddr;
            myaddr.sin_family = AF_INET;
            myaddr.sin_port = htons(50000);
            inet_aton("0.0.0.0", &myaddr.sin_addr);
            int res = bind(fd, (struct sockaddr*)&myaddr, sizeof(myaddr)); // returns -1

            myaddr.sin_port = htons(80);
            inet_aton("173.194.113.2", &myaddr.sin_addr);
            res = connect(fd, (struct sockaddr*)&myaddr, sizeof(myaddr)); // returns 0
        }
    }
};
nacl_io assumes that it is being run on a worker thread, not the main thread. This is because many socket functions are blocking, while it is illegal to block the main thread in a NaCl application. Unfortunately, the error messages are not very clear about this constraint.
The easiest way to make this code work is to use the ppapi_simple library. It will initialize nacl_io for you and start running your code on a worker thread. At this point, you'll be able to make blocking calls (such as bind). It also gives you a main-like entry point instead of having to create a pp::Instance.
Take a look at some of the demos in the NaCl SDK (e.g. examples/demo/earth, examples/demo/pi_generator) for how to use ppapi_simple.
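For orientation, a minimal skeleton of that approach might look as follows (a sketch: PPAPI_SIMPLE_REGISTER_MAIN is the entry-point macro from the SDK's ppapi_simple/ps_main.h, and the socket code mirrors the question's):

#include "ppapi_simple/ps_main.h"
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

// ppapi_simple runs this on a worker thread and initializes nacl_io first,
// so blocking calls such as bind() are legal here, unlike on the main thread.
int psmain(int argc, char* argv[])
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in myaddr = {};
    myaddr.sin_family = AF_INET;
    myaddr.sin_port = htons(50000);
    inet_aton("0.0.0.0", &myaddr.sin_addr);

    return bind(fd, (struct sockaddr*)&myaddr, sizeof(myaddr));
}

PPAPI_SIMPLE_REGISTER_MAIN(psmain)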

WebRTC SRTP decryption

I am trying to build an SRTP-to-RTP stream converter, and I am having issues getting the master key from the WebRTC PeerConnection I am creating.
From what I understand, with an SDES exchange the key is exchanged via the SDP and shown in the a=crypto field. So that situation seems pretty straightforward (please correct me if I am wrong), but it is ultimately useless, as WebRTC standardization now demands that SDES not be used (only Chrome supports it today, and it may be removed in the future).
For DTLS there is the fingerprint field in the SDP; is that a hash of the certificate to be used in the subsequent exchange? [EDIT: after doing some reading, I think that is not the case.] I would think that with knowledge of the fingerprint, alongside the ability to parse the DTLS packets in the exchange, I should be able to grab the master key to decode the SRTP stream, but I am hitting a wall, as I do not know where to look and am not even 100% sure it is possible.
So, in short, is it even feasible (without getting into the lower C++ API and creating my own implementation of WebRTC) to decode the SRTP feed created by a WebRTC PeerConnection in Chrome and Firefox (possibly through packet sniffing with the information gleaned from the SDP exchange)? [EDIT: depressingly, it seems that access to the private part of the key (a.k.a. the master key) is not possible... please correct me if I am wrong.]
Here is some code using the OpenSSL and libsrtp native APIs:
#define SRTP_MASTER_KEY_KEY_LEN  16
#define SRTP_MASTER_KEY_SALT_LEN 14

static void dtls_srtp_init(struct transport_dtls *dtls)
{
    /*
     * When SRTP mode is in effect, different keys are used for ordinary
     * DTLS record protection and SRTP packet protection. These keys are
     * generated using a TLS exporter [RFC5705] to generate
     *
     *   2 * (SRTPSecurityParams.master_key_len +
     *        SRTPSecurityParams.master_salt_len)
     *
     * bytes of data, which are assigned as shown below. The per-association
     * context value is empty.
     *
     *   client_write_SRTP_master_key[SRTPSecurityParams.master_key_len];
     *   server_write_SRTP_master_key[SRTPSecurityParams.master_key_len];
     *   client_write_SRTP_master_salt[SRTPSecurityParams.master_salt_len];
     *   server_write_SRTP_master_salt[SRTPSecurityParams.master_salt_len];
     */
    int code;
    err_status_t err;
    srtp_policy_t policy;
    char dtls_buffer[SRTP_MASTER_KEY_KEY_LEN * 2 + SRTP_MASTER_KEY_SALT_LEN * 2];
    char client_write_key[SRTP_MASTER_KEY_KEY_LEN + SRTP_MASTER_KEY_SALT_LEN];
    char server_write_key[SRTP_MASTER_KEY_KEY_LEN + SRTP_MASTER_KEY_SALT_LEN];
    size_t offset = 0;

    /*
     * The exporter label for this usage is "EXTRACTOR-dtls_srtp". (The
     * "EXTRACTOR" prefix is for historical compatibility.)
     * RFC 5764, 4.2. Key Derivation
     */
    const char *label = "EXTRACTOR-dtls_srtp";
    SRTP_PROTECTION_PROFILE *srtp_profile = SSL_get_selected_srtp_profile(dtls->ssl);

    /*
     * SSL_export_keying_material exports a value derived from the master
     * secret, as specified in RFC 5705. It writes |olen| bytes to |out| given
     * a label and optional context. (Since a zero-length context is allowed,
     * the |use_context| flag controls whether a context is included.)
     * It returns 1 on success and zero otherwise.
     */
    code = SSL_export_keying_material(dtls->ssl,
                                      dtls_buffer,
                                      sizeof(dtls_buffer),
                                      label,
                                      strlen(label),
                                      NULL,
                                      0,
                                      PJ_FALSE);

    memcpy(&client_write_key[0], &dtls_buffer[offset], SRTP_MASTER_KEY_KEY_LEN);
    offset += SRTP_MASTER_KEY_KEY_LEN;
    memcpy(&server_write_key[0], &dtls_buffer[offset], SRTP_MASTER_KEY_KEY_LEN);
    offset += SRTP_MASTER_KEY_KEY_LEN;
    memcpy(&client_write_key[SRTP_MASTER_KEY_KEY_LEN], &dtls_buffer[offset], SRTP_MASTER_KEY_SALT_LEN);
    offset += SRTP_MASTER_KEY_SALT_LEN;
    memcpy(&server_write_key[SRTP_MASTER_KEY_KEY_LEN], &dtls_buffer[offset], SRTP_MASTER_KEY_SALT_LEN);

    switch (srtp_profile->id)
    {
    case SRTP_AES128_CM_SHA1_80:
        crypto_policy_set_aes_cm_128_hmac_sha1_80(&policy.rtp);
        crypto_policy_set_aes_cm_128_hmac_sha1_80(&policy.rtcp);
        break;
    case SRTP_AES128_CM_SHA1_32:
        crypto_policy_set_aes_cm_128_hmac_sha1_32(&policy.rtp);  // RTP is 32,
        crypto_policy_set_aes_cm_128_hmac_sha1_80(&policy.rtcp); // RTCP still 80
        break;
    default:
        assert(0);
    }

    policy.ssrc.value = 0;
    policy.next = NULL;

    /* Init transmit direction */
    policy.ssrc.type = ssrc_any_outbound;
    policy.key = client_write_key;
    err = srtp_create(&dtls->srtp_ctx_rx, &policy);
    if (err != err_status_ok) {
        printf("not working\n");
    }

    /* Init receive direction */
    policy.ssrc.type = ssrc_any_inbound;
    policy.key = server_write_key;
    err = srtp_create(&dtls->srtp_ctx_tx, &policy);
    if (err != err_status_ok) {
        printf("not working\n");
    }
}
I found SSL_export_keying_material, which can take a key from the SSL mechanism (after the DTLS handshake) and use it for SRTP.
I am not an expert, just hitting the wall like you...
It's not clear if this is your case, but note that it is not possible to access the audio/video (i.e., decrypt the SRTP) merely by being a passive observer; that's the whole point of having transport encryption.
The protocol (DTLS-SRTP) works roughly like this:
each browser has a unique keypair, usually generated at installation time
The fingerprint of the public part of each side's keypair is included in the SDP, in the offer and the answer.
Both ends negotiate a DTLS connection through an ordinary DTLS handshake, thus deriving a kind of session key, which is used to secure the (DTLS) connection
The derived session key is used as the SRTP key
If you don't have access to at least one of the private parts of the keypairs, it's not possible at all to decrypt the connection. If the endpoints choose to use a Diffie-Hellman key exchange on the handshake, a passive attacker will not be able to get the derived key, even with access to both private keys. This property is called forward secrecy.
The only reliable ways of accessing the SRTP contents are doing the handshake yourself, implementing an active MITM (changing the fingerprints in the SDP), or getting the private key from the browser and restricting the DH key exchange (which, AFAIK, is not possible at all).
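(One small aside on the fingerprint question above: the a=fingerprint value in the SDP is indeed just a hash of the peer's DER-encoded certificate, so it authenticates the DTLS handshake but reveals nothing about the keys. A sketch using OpenSSL's X509_digest, assuming you already hold an X509* from somewhere:)

#include <openssl/evp.h>
#include <openssl/x509.h>
#include <stdio.h>

// Print a certificate's SHA-256 fingerprint in the colon-separated
// form used by a=fingerprint lines in SDP.
static void print_fingerprint(X509 *cert)
{
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int n = 0;
    if (X509_digest(cert, EVP_sha256(), md, &n) == 1) {
        for (unsigned int i = 0; i < n; i++)
            printf("%02X%s", md[i], (i + 1 < n) ? ":" : "\n");
    }
}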

RInside: parseEvalQ 'Parse Error' causes each subsequent call to parseEvalQ to give a 'Parse Error' even if exception handled

My code, which tries to emulate an R shell via C++, allows a user to send R commands over a TCP connection; the commands are passed to the R instance through the RInside::parseEvalQ function at runtime. I have to be able to handle badly formatted commands. Whenever a bad command is given as an argument to parseEvalQ, I catch the runtime error thrown (looking at RInside.cpp, my specific error is flagged with the 'PARSE_ERROR' status within the parseEval(const string&, SEXP) function); what() gives a "St9exception".
I have two problems, the first more pressing than the second:
1a. After an initial parse error, any subsequent call to parseEvalQ results in another parse error, even if the argument is valid. Is the embedded R instance being corrupted in some way by the parse error?
1b. The RInside documentation recommends using Rcpp::Evaluator::run to handle R exceptions in C++ (which I suspect are being thrown somewhere within the R instance during the call to parseEval(const string&, SEXP), before it returns the 'PARSE_ERROR' status). I have experimented with this but can find no examples on the web of how to use Rcpp::Evaluator::run in practice.
2. In my program I re-route stdout and stderr (at the C++ level) to the file descriptor of my TCP connection. Any error messages from the RInside instance reach the console, but regular output does not. I send RInside the command 'sink(stderr(), type="output")' in an effort to re-route stdout to stderr (as stderr does show up in my console), but regular output is still not shown. 'print(command)' works, but I'd like a cleaner way of passing stdout straight to the console, as in a normal R shell.
Any help and/or thoughts would be much appreciated. A distilled version of my code is shown below:
#include <RInside.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <sstream>
#include <string>

using namespace std;

string request_cpp;
ostringstream oss;

int read(FILE* tcp_fd)
{
    /* function to read input from FILE* into the 'request_cpp' string */
}

int write(FILE* tcp_fd, const string& response)
{
    /* function to write a string to FILE* */
}

int main(int argc, char* argv[])
{
    // create RInside object
    RInside R(argc, argv);

    // socket
    int sd = socket(PF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    addr.sin_family = AF_INET;
    addr.sin_port = htons(40650);

    // set and accept connection on socket
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    bind(sd, (struct sockaddr*)&addr, sizeof(addr));
    listen(sd, 1);
    int sd_i = accept(sd, 0, 0);

    // re-route stdout and stderr to socket
    close(1);
    dup(sd_i);
    close(2);
    dup(sd_i);

    // open read/write file descriptor to socket
    FILE* fp = fdopen(sd_i, "r+");

    // emulate R prompt
    write(fp, "> ");

    // (attempt to) redirect R's stdout to stderr
    R.parseEvalQ("sink(stderr(),type=\"output\");");

    // read from socket and pass commands to RInside
    while (read(fp))
    {
        try
        {
            // skip empty input
            if (request_cpp == "")
            {
                write(fp, "> ");
                continue;
            }
            else if (request_cpp == "q()")
            {
                break;
            }
            else
            {
                // clear string stream
                oss.str("");
                // wrap command in try()
                oss << "try(" << request_cpp << ");" << endl;
                // send command
                R.parseEvalQ(oss.str());
            }
        }
        catch (const exception& e)
        {
            // print exception to console
            write(fp, e.what());
        }
        write(fp, "> ");
    }

    fclose(fp);
    close(sd_i);
    exit(0);
}
I missed this weeks ago as you didn't use the 'r' tag.
It seems you are re-implementing Simon's trusted Rserve. Why not use that directly?
Otherwise, for the Rcpp question, consider asking on our rcpp-devel list.
