nghttp2: Using server-sent events to be consumed by an EventSource

I'm using nghttp2 to implement a REST server which should use HTTP/2 and server-sent events (to be consumed by an EventSource in the browser). However, based on the examples it is unclear to me how to implement SSE. Using res.push() as in asio-sv.cc doesn't seem to be the right approach.
What would be the right way to do it? I'd prefer to use nghttp2's C++ API, but the C API would do as well.

Yup, I did something like that back in 2018. The documentation was rather sparse :).
First of all, ignore response::push because that's HTTP/2 server push -- a mechanism for proactively sending unsolicited resources to the client before it requests them. I know it sounds like what you need, but it is not -- the typical use case is proactively sending a CSS file and some images along with the originally requested HTML page.
The key thing is that your end() callback must return NGHTTP2_ERR_DEFERRED whenever you run out of data to send. When your application obtains more data to send, call response::resume().
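Boiled down, the producer pattern looks roughly like this (a sketch only; nothing_to_send() and fill_buffer() are hypothetical placeholders, and the complete working example follows below):
// Generator callback handed to response::end().
res.end([&](uint8_t* buf, std::size_t len, uint32_t* data_flags) -> ssize_t {
  if (nothing_to_send())           // hypothetical application-side check
    return NGHTTP2_ERR_DEFERRED;   // tell nghttp2 to stop polling this stream for now
  return fill_buffer(buf, len);    // hypothetical: copy pending bytes, return the count
});
// Later, when new data becomes available, wake the deferred stream up again:
res.resume();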
Here's a simple example. Build it with g++ -std=c++17 -Wall -O3 -ggdb clock.cpp -lssl -lcrypto -pthread -lnghttp2_asio -lspdlog -lfmt. Be careful: modern browsers won't do HTTP/2 over a plaintext socket, so you'll need to reverse-proxy it via something like nghttpx -f '*,8080;no-tls' -b '::1,10080;;proto=h2'.
#include <boost/asio/io_service.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/signals2.hpp>
#include <chrono>
#include <list>
#include <nghttp2/asio_http2_server.h>
#define SPDLOG_FMT_EXTERNAL
#include <spdlog/spdlog.h>
#include <thread>
using namespace nghttp2::asio_http2;
using namespace std::literals;
using Signal = boost::signals2::signal<void(const std::string& message)>;
class Client {
const server::response& res;
enum State {
HasEvents,
WaitingForEvents,
};
std::atomic<State> state;
std::list<std::string> queue;
mutable std::mutex mtx;
boost::signals2::scoped_connection subscription;
size_t send_chunk(uint8_t* destination, std::size_t len, uint32_t* data_flags [[maybe_unused]])
{
std::size_t written{0};
std::lock_guard lock{mtx};
if (state != HasEvents) throw std::logic_error{std::to_string(__LINE__)};
while (!queue.empty()) {
auto num = std::min(queue.front().size(), len - written);
std::copy_n(queue.front().begin(), num, destination + written);
written += num;
if (num < queue.front().size()) {
queue.front() = queue.front().substr(num);
spdlog::debug("{} send_chunk: partial write", (void*)this);
return written;
}
queue.pop_front();
spdlog::debug("{} send_chunk: sent one event", (void*)this);
}
state = WaitingForEvents;
return written;
}
public:
Client(const server::request& req, const server::response& res, Signal& signal)
: res{res}
, state{WaitingForEvents}
, subscription{signal.connect([this](const auto& msg) {
enqueue(msg);
})}
{
spdlog::warn("{}: {} {} {}", (void*)this, boost::lexical_cast<std::string>(req.remote_endpoint()), req.method(), req.uri().raw_path);
res.write_head(200, {{"content-type", {"text/event-stream", false}}});
}
void onClose(const uint32_t ec)
{
spdlog::error("{} onClose", (void*)this);
subscription.disconnect();
}
ssize_t process(uint8_t* destination, std::size_t len, uint32_t* data_flags)
{
spdlog::trace("{} process", (void*)this);
switch (state) {
case HasEvents:
return send_chunk(destination, len, data_flags);
case WaitingForEvents:
return NGHTTP2_ERR_DEFERRED;
}
__builtin_unreachable();
}
void enqueue(const std::string& what)
{
{
std::lock_guard lock{mtx};
queue.push_back("data: " + what + "\n\n");
}
state = HasEvents;
res.resume();
}
};
int main(int argc [[maybe_unused]], char** argv [[maybe_unused]])
{
spdlog::set_level(spdlog::level::trace);
Signal sig;
std::thread timer{[&sig]() {
for (int i = 0; /* forever */; ++i) {
std::this_thread::sleep_for(std::chrono::milliseconds{666});
spdlog::info("tick: {}", i);
sig("ping #" + std::to_string(i));
}
}};
server::http2 server;
server.num_threads(4);
server.handle("/events", [&sig](const server::request& req, const server::response& res) {
auto client = std::make_shared<Client>(req, res, sig);
res.on_close([client](const auto ec) {
client->onClose(ec);
});
res.end([client](uint8_t* destination, std::size_t len, uint32_t* data_flags) {
return client->process(destination, len, data_flags);
});
});
server.handle("/", [](const auto& req, const auto& resp) {
spdlog::warn("{} {} {}", boost::lexical_cast<std::string>(req.remote_endpoint()), req.method(), req.uri().raw_path);
resp.write_head(200, {{"content-type", {"text/html", false}}});
resp.end(R"(<html><head><title>nghttp2 event stream</title></head>
<body><h1>events</h1><ul id="x"></ul>
<script type="text/javascript">
const ev = new EventSource("/events");
ev.onmessage = function(event) {
const li = document.createElement("li");
li.textContent = event.data;
document.getElementById("x").appendChild(li);
};
</script>
</body>
</html>)");
});
boost::system::error_code ec;
if (server.listen_and_serve(ec, "::", "10080")) {
return 1;
}
return 0;
}
I have a feeling that my queue handling is probably too complex. When testing via curl, I never seem to run out of buffer space. In other words, even if the client is not reading any data from the socket, the library keeps invoking send_chunk, asking for up to 16 kB of data at a time. Strange. I have no idea how it behaves when data is pushed more heavily.
My "real code" used to have a third state, Closed, but I think that blocking events via on_close is enough here. However, I think you never want to enter send_chunk if the client has already disconnected, but before the destructor gets called.

Related

Segmentation Faults when testing typed actors with custom atoms

I am trying to use the testing macros with my actors but I am getting a lot of segmentation faults. I believe I have narrowed down the problem to my use of custom atoms. To demonstrate the issue I modified the 'simple actor test' from here to make the adder strongly typed.
#include "caf/test/dsl.hpp"
#include "caf/test/unit_test_impl.hpp"
#include "caf/all.hpp"
namespace {
struct fixture {
caf::actor_system_config cfg;
caf::actor_system sys;
caf::scoped_actor self;
fixture() : sys(cfg), self(sys) {
// nop
}
};
using calculator_type = caf::typed_actor<caf::result<int>(int, int)>;
calculator_type::behavior_type adder() {
return {
[=](int x, int y) {
return x + y;
}
};
}
} // namespace
CAF_TEST_FIXTURE_SCOPE(actor_tests, fixture)
CAF_TEST(simple actor test) {
// Our Actor-Under-Test.
auto aut = self->spawn(adder);
self->request(aut, caf::infinite, 3, 4).receive(
[=](int r) {
CAF_CHECK(r == 7);
},
[&](caf::error& err) {
// Must not happen, stop test.
CAF_FAIL(err);
});
}
CAF_TEST_FIXTURE_SCOPE_END()
This works great. I then took it one step further and added a custom atom called "add_numbers":
#include "caf/test/dsl.hpp"
#include "caf/test/unit_test_impl.hpp"
#include "caf/all.hpp"
CAF_BEGIN_TYPE_ID_BLOCK(calc_msgs, first_custom_type_id)
CAF_ADD_ATOM(calc_msgs, add_numbers)
CAF_END_TYPE_ID_BLOCK(calc_msgs)
namespace {
struct fixture {
caf::actor_system_config cfg;
caf::actor_system sys;
caf::scoped_actor self;
fixture() : sys(cfg), self(sys) {
// nop
}
};
using calculator_type = caf::typed_actor<caf::result<int>(add_numbers, int, int)>;
calculator_type::behavior_type adder() {
return {
[=](add_numbers, int x, int y) {
return x + y;
}
};
}
} // namespace
CAF_TEST_FIXTURE_SCOPE(actor_tests, fixture)
CAF_TEST(simple actor test) {
// Our Actor-Under-Test.
auto aut = self->spawn(adder);
self->request(aut, caf::infinite, add_numbers_v, 3, 4).receive(
[=](int r) {
CAF_CHECK(r == 7);
},
[&](caf::error& err) {
// Must not happen, stop test.
CAF_FAIL(err);
});
}
CAF_TEST_FIXTURE_SCOPE_END()
This compiles fine but produces a segmentation fault at runtime. I suspect it has something to do with the fact that I am not passing calc_msgs to anything. How do I do that? Or is something else going on?
The ID block adds the compile-time metadata, but you also need to initialize some run-time state via
init_global_meta_objects<caf::id_block::calc_msgs>();
Ideally, you initialize this state before calling any other CAF function, in particular before initializing the actor system. CAF itself uses custom main functions for its test suites to do that (cf. core-test.cpp). In your case, it would look somewhat like this:
int main(int argc, char** argv) {
using namespace caf;
init_global_meta_objects<id_block::calc_msgs>();
core::init_global_meta_objects();
return test::main(argc, argv);
}
This probably means that you would need to put the type ID block into a header file. This is nothing special about the unit tests, though. If you run a regular CAF application, you need to initialize the global meta objects as well. CAF_MAIN can do that for you as long as you pass it the type ID block(s); otherwise you need to call the functions by hand. The CAF manual covers this in a bit more detail here: https://actor-framework.readthedocs.io/en/0.18.5/ConfiguringActorApplications.html#configuring-actor-applications.
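For a regular application (not a test binary), a minimal sketch of the CAF_MAIN route, assuming the calc_msgs ID block is visible in this translation unit, would be:
#include "caf/all.hpp"
// ... calc_msgs type ID block as in the question, e.g. included from a header ...
void caf_main(caf::actor_system& sys) {
  // regular application logic goes here
}
// Passing the ID block tells CAF_MAIN to initialize its global meta objects for you.
CAF_MAIN(caf::id_block::calc_msgs)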
If this is your only test at the moment, you can define CAF_TEST_NO_MAIN before including caf/test/unit_test_impl.hpp and then add the custom main function. Once you have multiple test suites, it makes sense to move the main to its own file, though.
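A minimal sketch of that single-file arrangement, reusing the type ID block and custom main from above, might look like this:
#define CAF_TEST_NO_MAIN
#include "caf/test/dsl.hpp"
#include "caf/test/unit_test_impl.hpp"
#include "caf/all.hpp"
CAF_BEGIN_TYPE_ID_BLOCK(calc_msgs, first_custom_type_id)
CAF_ADD_ATOM(calc_msgs, add_numbers)
CAF_END_TYPE_ID_BLOCK(calc_msgs)
// ... fixture, CAF_TEST_FIXTURE_SCOPE and CAF_TEST cases exactly as before ...
int main(int argc, char** argv) {
  caf::init_global_meta_objects<caf::id_block::calc_msgs>();
  caf::core::init_global_meta_objects();
  return caf::test::main(argc, argv);
}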

How to read a pcap file generated by tcpdump that contains large UDP packets and reassemble the IP fragmented packets?

I would like to read a pcap file generated by tcpdump that contains large UDP packets that have undergone IPv4 fragmentation. The original packets are around 22000 bytes each.
In C++, I would use libtins with its IPv4Reassembler. Is there a way I can do something similar in Rust?
Here is what I have written so far in Rust: a highly incomplete first-pass attempt (using the pnet crate):
use pnet::packet::{
ethernet::{EtherTypes, EthernetPacket},
ip::IpNextHeaderProtocols,
ipv4::Ipv4Packet,
udp::UdpPacket,
Packet,
};
struct Ipv4Reassembler {
cap: pcap::Capture<pcap::Offline>,
}
impl Iterator for Ipv4Reassembler {
type Item = Vec<u8>;
fn next(&mut self) -> Option<Self::Item> {
let mut payload = Vec::<u8>::new();
while let Some(packet) = self.cap.next().ok() {
// todo: handle packets other than Ethernet packets
let ethernet = EthernetPacket::new(packet.data).unwrap();
match ethernet.get_ethertype() {
EtherTypes::Ipv4 => {
let ipv4_packet = Ipv4Packet::new(ethernet.payload()).unwrap();
// dbg!(&ipv4_packet);
// todo: discard incomplete packets
// todo: construct header for reassembled packet
// todo: check id, etc
let off: usize = 8 * ipv4_packet.get_fragment_offset() as usize;
let end = off + ipv4_packet.payload().len();
if payload.len() < end {
payload.resize(end, 0);
}
payload[off..end].clone_from_slice(ipv4_packet.payload());
if ipv4_packet.get_flags() & 1 == 0 {
return Some(payload);
}
}
_ => {}
}
}
None
}
}
fn main() {
let pcap_path = "os-992114000702.pcap";
let reass = Ipv4Reassembler {
cap: pcap::Capture::from_file(&pcap_path).unwrap(),
};
for payload in reass {
let udp_packet = UdpPacket::new(&payload).unwrap();
dbg!(&udp_packet);
dbg!(&udp_packet.payload().len());
}
}
In C++ here is the code I would use (using libtins):
#include <tins/ip_reassembler.h>
#include <tins/packet.h>
#include <tins/rawpdu.h>
#include <tins/sniffer.h>
#include <tins/tins.h>
#include <tins/udp.h>
#include <iostream>
#include <string>
void read_packets(const std::string &pcap_filename) {
Tins::IPv4Reassembler reassembler;
Tins::FileSniffer sniffer(pcap_filename);
while (Tins::Packet packet = sniffer.next_packet()) {
auto &pdu = *packet.pdu();
const Tins::Timestamp &timestamp = packet.timestamp();
if (reassembler.process(pdu) != Tins::IPv4Reassembler::FRAGMENTED) {
const Tins::UDP *udp = pdu.find_pdu<Tins::UDP>();
if (!udp) {
continue;
}
const Tins::RawPDU *raw = pdu.find_pdu<Tins::RawPDU>();
if (!raw) {
continue;
}
const Tins::RawPDU::payload_type &payload = raw->payload();
std::cout << "Packet: " << payload.size() << std::endl;
// do something with the reassembled packet here
}
}
}
int main() {
const std::string pcap_path = "os-992114000702.pcap";
read_packets(pcap_path);
}
g++ -O3 -o pcap pcap.cpp -ltins
It seems that one solution is to implement RFC 815, but I am not sure how to do that in Rust. I have found:
this old pull request to smoltcp, but it appears to have been abandoned;
Fuchsia's reassembly.rs, but I have no idea how to use it outside of Fuchsia.

Unable to push data to server using Asynchronous RPC with DCE pipes

I want to push data to the server using Asynchronous RPC with pipes. Here is my code:
//file: Xasyncpipe.idl:
interface IMyAsyncPipe
{
//define the pipe type
typedef pipe int ASYNC_INTPIPE;
int MyAsyncInputPipe(
handle_t hBinding,
[in] ASYNC_INTPIPE *inpipe) ;
};
//file:Xasyncpipe.acf:
interface IMyAsyncPipe
{
[async] MyAsyncInputPipe () ;
} ;
//file:Client.cpp
main()
{
// Creates a binding handle.
...
RPC_ASYNC_STATE Async;
status = RpcAsyncInitializeHandle(&Async, sizeof(RPC_ASYNC_STATE));
Async.UserInfo = NULL;
Async.NotificationType = RpcNotificationTypeIoc;
Async.u.IOC.hIOPort = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
ASYNC_INTPIPE inputPipe;
// Calls the RPC function.
MyAsyncInputPipe(&Async, hBinding, &inputPipe);
}
//file:Server.cpp
void MyAsyncInputPipe(PRPC_ASYNC_STATE state, handle_t hBinding, ASYNC_INTPIPE *pipe)
{
std::cout << "Input Test" << std::endl;
}
I added a breakpoint in the function MyAsyncInputPipe, and the breakpoint is never triggered.
If I change Xasyncpipe.idl from [in] ASYNC_INTPIPE *inpipe to [out] ASYNC_INTPIPE *inpipe, the breakpoint is triggered.
Does anyone know the reason?
The server will not receive the data until the client-side pipe pushes a terminal signal:
//push terminal signal
inputPipe.push(inputPipe.state, NULL, 0);
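As a rough sketch of the client side (assuming the MIDL-generated ASYNC_INTPIPE exposes the usual push member and state field, and that the binding and async handle are set up as in the question):
// Start the asynchronous call; with an [in] pipe, the client drives the transfer.
MyAsyncInputPipe(&Async, hBinding, &inputPipe);
// Push the actual payload; this can be repeated for further chunks.
int chunk[4] = {1, 2, 3, 4};
inputPipe.push(inputPipe.state, chunk, 4);
// Terminal push: zero elements tells the RPC runtime the pipe is complete,
// and only then does the server-side routine run.
inputPipe.push(inputPipe.state, NULL, 0);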

File changes handle, QSocketNotifier disables due to invalid socket

I am developing for a touch screen and need to detect touch events to turn the screen back on. I am using Qt and sockets and have run into an interesting issue.
Whenever my QSocketNotifier detects the event, it sends me infinite notices about it. Therefore I need to close and reopen the event file to cycle the notifier (inputNotifier in the code below). The problem usually arises several minutes after the device has been running, when the file (inputDevice) suddenly changes its handle from 24 to something else (usually 17).
I am not sure what to do, because the initial connect statement is tied to the initial notifier pointer. If I create a new notifier using the new handle, the connect is invalid. As far as I can tell there is no option to set a new socket value on a running QSocketNotifier. Suggestions? The relevant code is below:
#include "backlightcontroller.h"
#include <QTimer>
#include <QFile>
#include <syslog.h>
#include <QDebug>
#include <QSocketNotifier>
BacklightController::BacklightController(QObject *parent) :
QObject(parent)
{
backlightActive = true;
// setup timer
trigger = new QTimer;
trigger->setSingleShot(false);
connect(trigger, SIGNAL(timeout()), SLOT(deactivateBacklight()));
idleTimer = new QTimer;
idleTimer->setInterval(IDLE_TIME * 1000);
idleTimer->setSingleShot(false);
connect(idleTimer, SIGNAL(timeout()), SIGNAL(idled()));
idleTimer->start();
// setup socket notifier
inputDevice = new QFile(USERINPUT_DEVICE);
if (!inputDevice->open(QIODevice::ReadOnly))
{
syslog (LOG_ERR, "Input file for Backlight controller could not been opened.");
}
else
{
inputNotifier = new QSocketNotifier(inputDevice->handle(), QSocketNotifier::Read);
inputNotifier->setEnabled(true);
connect(inputNotifier, SIGNAL(activated(int)), SLOT(activateBacklight()));
}
qDebug()<<"backlight socket: "<<inputNotifier->socket();
// read out settings-file
QString intensity = Global::system_settings->getValue("BelatronUS_backlight_intensity");
if (intensity.length() == 0) intensity = "100";
QString duration = Global::system_settings->getValue("BelatronUS_backlight_duration");
if (duration.length() == 0) duration = "180";
QString always_on = Global::system_settings->getValue("BelatronUS_backlight_always_on");
if (always_on.length() == 0) always_on = "0";
setIntensity(intensity.toInt());
setDuration(duration.toInt());
if (always_on == "0")
setAlwaysOn(false);
else
setAlwaysOn(true);
}
BacklightController::~BacklightController()
{
trigger->stop();
inputNotifier->setEnabled(false);
inputDevice->close();
delete trigger;
delete inputDevice;
delete inputNotifier;
}
void BacklightController::setCurrentIntensity(int intensity)
{
// adapt backlight intensity
QFile backlightFile("/sys/class/backlight/atmel-pwm-bl/brightness");
if (!backlightFile.open(QIODevice::WriteOnly))
{
syslog (LOG_ERR, "Backlight intensity file could not been opened.");
}
else
{
QString intensityString = QString::number(TO_BRIGHTNESS(intensity));
if (backlightFile.write(
qPrintable(intensityString), intensityString.length()
) < intensityString.length())
{
syslog (LOG_ERR, "Backlight intensity could not been changed.");
}
backlightFile.close();
}
}
void BacklightController::resetNotifier()
{
inputDevice->close();
if (!inputDevice->open(QIODevice::ReadOnly))
{
syslog (LOG_ERR, "BacklightController::%s: Input file could not been opened.", __FUNCTION__);
}
qDebug()<<"reset, handle: "<<inputDevice->handle();
//inputNotifier=QSocketNotifier(inputDevice->handle(), QSocketNotifier::Read);
// restart timer after user input
idleTimer->start();
}
void BacklightController::activateBacklight()
{
// only activate backlight, if it's off (avoid to useless fileaccess)
if (!backlightActive)
{
setCurrentIntensity(_intensity);
backlightActive = true;
emit backlightActivated();
}
// restart backlight timeout, but only if we don't want the backlight to shine all the time
if (!_alwaysOn)
trigger->start();
// reset notifier to be able to catch the next userinput
resetNotifier();
}
void BacklightController::deactivateBacklight()
{
// don't turn it off, if it's forced on
if (!_alwaysOn)
{
if (backlightActive)
{
// only deactivate backlight, if it's on (avoid to useless fileaccess)
setCurrentIntensity(BACKLIGHT_INTENSITY_OFF);
backlightActive = false;
emit backlightDeactivated();
}
}
qDebug()<<"trigger stopping";
trigger->stop();
}
void BacklightController::setIntensity(int intensity)
{
if (intensity > 100)
intensity = 100;
else if (intensity < 0)
intensity = 0;
_intensity = intensity;
// write new intensity to file if it's active at the moment
if (backlightActive)
{
setCurrentIntensity(_intensity);
trigger->start();
}
}
void BacklightController::setDuration(int duration)
{
if (duration < 1)
duration = 1;
_duration = duration;
trigger->setInterval(_duration * MS_IN_SEC);
// reset trigger after changing duration
if (backlightActive)
{
trigger->start();
}
}
void BacklightController::setAlwaysOn(bool always_on)
{
_alwaysOn = always_on;
// tell the timer what to do now
if (_alwaysOn)
{
this->activateBacklight();
trigger->stop();
}
else
{
trigger->start();
}
}
I seem to have found a working solution for now. It's not the greatest, so if there are better solutions I would be interested to hear them. The reason I did not think of this before is that I assumed a connect statement made inside a function would only have limited scope, ending when the function returned.
The solution was simply to check whether the file's handle has changed and, if so, create a new notifier for that handle, re-enable it (it has likely been disabled by now), and issue a new connect statement for the new pointer. This is the code I used, added just below the closing and reopening of the event file:
if(inputDevice->handle()!=inputNotifier->socket()){
inputNotifier = new QSocketNotifier(inputDevice->handle(), QSocketNotifier::Read);
inputNotifier->setEnabled(true);
connect(inputNotifier, SIGNAL(activated(int)), SLOT(activateBacklight()));
}

How to use a QFile with std::iostream?

Is it possible to use a QFile like a std::iostream? I'm quite sure there must be a wrapper out there. The question is where?
I have another library which requires a std::istream as an input parameter, but in my program I only have a QFile at this point.
I came up with my own solution using the following code:
#include <ios>
#include <QIODevice>
class QStdStreamBuf : public std::streambuf
{
public:
QStdStreamBuf(QIODevice *dev) : std::streambuf(), m_dev(dev)
{
// Initialize get pointer. This should be zero so that underflow is called upon first read.
this->setg(0, 0, 0);
}
protected:
virtual std::streamsize xsgetn(std::streambuf::char_type *str, std::streamsize n)
{
return m_dev->read(str, n);
}
virtual std::streamsize xsputn(const std::streambuf::char_type *str, std::streamsize n)
{
return m_dev->write(str, n);
}
virtual std::streambuf::pos_type seekoff(std::streambuf::off_type off, std::ios_base::seekdir dir, std::ios_base::openmode /*__mode*/)
{
switch(dir)
{
case std::ios_base::beg:
break;
case std::ios_base::end:
off = m_dev->size() - off;
break;
case std::ios_base::cur:
off = m_dev->pos() + off;
break;
}
if(m_dev->seek(off))
return m_dev->pos();
else
return std::streambuf::pos_type(std::streambuf::off_type(-1));
}
virtual std::streambuf::pos_type seekpos(std::streambuf::pos_type off, std::ios_base::openmode /*__mode*/)
{
if(m_dev->seek(off))
return m_dev->pos();
else
return std::streambuf::pos_type(std::streambuf::off_type(-1));
}
virtual std::streambuf::int_type underflow()
{
// Read enough bytes to fill the buffer.
std::streamsize len = sgetn(m_inbuf, sizeof(m_inbuf)/sizeof(m_inbuf[0]));
// Since the input buffer content is now valid (or is new)
// the get pointer should be initialized (or reset).
setg(m_inbuf, m_inbuf, m_inbuf + len);
// If nothing was read, then the end is here.
if(len == 0)
return traits_type::eof();
// Return the first character (converted so that a 0xFF byte isn't mistaken for EOF).
return traits_type::to_int_type(m_inbuf[0]);
}
private:
static const std::streamsize BUFFER_SIZE = 1024;
std::streambuf::char_type m_inbuf[BUFFER_SIZE];
QIODevice *m_dev;
};
class QStdIStream : public std::istream
{
public:
QStdIStream(QIODevice *dev) : std::istream(m_buf = new QStdStreamBuf(dev)) {}
virtual ~QStdIStream()
{
rdbuf(0);
delete m_buf;
}
private:
QStdStreamBuf * m_buf;
};
It works fine for reading local files. I haven't tested it for writing. This code is surely not perfect, but it works.
I came up with my own solution (which uses the same idea Stephen Chu suggested):
#include <iostream>
#include <fstream>
#include <cstdio>
#include <QtCore>
using namespace std;
void externalLibFunction(istream & input_stream) {
copy(istream_iterator<string>(input_stream),
istream_iterator<string>(),
ostream_iterator<string>(cout, " "));
}
ifstream QFileToifstream(QFile & file) {
Q_ASSERT(file.isReadable());
return ifstream(::_fdopen(file.handle(), "r"));
}
int main(int argc, char ** argv)
{
QFile file("a file");
file.open(QIODevice::WriteOnly);
file.write(QString("some string").toLatin1());
file.close();
file.open(QIODevice::ReadOnly);
std::ifstream ifs(QFileToifstream(file));
externalLibFunction(ifs);
}
Output:
some string
This code uses the std::ifstream move constructor (a C++0x feature) specified in section 27.9.1.7 [basic_ifstream constructors] of the Working Draft, Standard for Programming Language C++:
basic_ifstream(basic_ifstream&& rhs);
Effects: Move constructs from the rvalue rhs. This is accomplished by move constructing the base class, and the contained basic_filebuf. Next basic_istream::set_rdbuf(&sb) is called to install the contained basic_filebuf.
See How to return an fstream (C++0x) for discussion on this subject.
If the QFile object you get is not already open for reading, you can get the file name from it and open a std::ifstream yourself.
If it's already open, you can get the file handle/descriptor with handle() and go from there. There's no portable way of getting an fstream from a platform handle, so you will have to find a workaround for your platforms.
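A minimal sketch of the first suggestion (the not-yet-open case; openAsIfstream is a made-up helper name, and returning the stream relies on the std::ifstream move constructor discussed above):
#include <fstream>
#include <QFile>
// Reuse the QFile's name to open a std::ifstream directly.
std::ifstream openAsIfstream(const QFile &file) {
  return std::ifstream(file.fileName().toStdString());
}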
Here's a good guide for subclassing std::streambuf to provide a non-seekable read-only std::istream: https://stackoverflow.com/a/14086442/316578
Here is a simple class based on that approach which adapts a QFile into an std::streambuf which can then be wrapped in an std::istream.
#include <iostream>
#include <QFile>
constexpr qint64 ERROR = -1;
constexpr qint64 BUFFER_SIZE = 1024;
class QFileInputStreamBuffer final : public std::streambuf {
private:
QFile &m_file;
QByteArray m_buffer;
public:
explicit QFileInputStreamBuffer(QFile &file)
: m_file(file),
m_buffer(BUFFER_SIZE, Qt::Uninitialized) {
}
virtual int underflow() override {
if (atEndOfBuffer()) {
// try to get more data
const qint64 bytesReadIntoBuffer = m_file.read(m_buffer.data(), BUFFER_SIZE);
if (bytesReadIntoBuffer != ERROR) {
setg(m_buffer.data(), m_buffer.data(), m_buffer.data() + bytesReadIntoBuffer);
}
}
if (atEndOfBuffer()) {
// no more data available
return std::char_traits<char>::eof();
}
else {
return std::char_traits<char>::to_int_type(*gptr());
}
}
private:
bool atEndOfBuffer() const {
return gptr() == egptr();
}
};
If you want to be able to do more things like seek, write, etc., then you'd need one of the other, more complex solutions here, which override more streambuf functions.
If you don't care much about performance, you can always read everything from the file, dump it into a std::stringstream, and then pass that to your library (or the other way around: buffer everything into a stringstream and then write it to a QFile).
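For example, a rough sketch of the first direction (assuming the whole file fits comfortably in memory; qfileToStringStream is a made-up helper name):
#include <sstream>
#include <QFile>
// Slurp an already-open QFile into a std::stringstream.
std::stringstream qfileToStringStream(QFile &file) {
  const QByteArray contents = file.readAll();
  return std::stringstream(std::string(contents.constData(), contents.size()));
}
The resulting stream can be stored in a local variable and handed to anything that expects a std::istream&.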
Other than that, it doesn't look like the two can inter-operate. At any rate, Qt to STL inter operations are often a cause for obscure bugs and subtle inconsistencies if the version of STL that Qt was compiled with is different in any way from the version of STL you are using. This can happen for instance if you change the version of Visual Studio.
