I am a total newbie in C#, .NET Core 2 and Protocol Buffers, but I have to work with these three technologies for a personal project (client/server architecture). I have some questions about serializing/deserializing multiple messages.
I have already seen this:
https://developers.google.com/protocol-buffers/docs/techniques#streaming
So I know it needs a special technique. After a little Google session I found something about push/pop limits in C++, but I haven't seen docs for C#.
I have another question: does Protocol Buffers handle reading nicely? My sockets are monitored with select, so when I want to read my messages (using https://developers.google.com/protocol-buffers/docs/reference/csharp/class/google/protobuf/message-parser#class_google_1_1_protobuf_1_1_message_parser_1a110e5d9bc61837e369e5deb093f59161)
I am not sure how protobuf will stop reading (I don't want to read beyond the data available on the socket, because that would make my server block...).
Does it handle that? Thank you...
Standalone protobufs aren't delimited in any way: they don't encode their length and have no fixed start or end.
But the API gives you some tools for sending and storing multiple messages. Specifically, you can use WriteDelimitedTo() to write multiple length-prefixed protobufs to a stream, and then read them back with ParseDelimitedFrom().
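In C#, these live on Google.Protobuf's MessageExtensions and MessageParser types: WriteDelimitedTo() prefixes each message with its varint-encoded length, and ParseDelimitedFrom() consumes exactly one such message. A minimal sketch, assuming a generated message type called MyMessage (a placeholder; the FileStream could just as well be a NetworkStream around your socket):

using System.IO;
using Google.Protobuf;

// msg1 / msg2 are instances of a generated message type (MyMessage is a placeholder).
var msg1 = new MyMessage { /* set fields here */ };
var msg2 = new MyMessage { /* set fields here */ };

// Writing: each message is prefixed with its varint-encoded length,
// so several messages can share one stream.
using (var output = File.Create("messages.bin"))
{
    msg1.WriteDelimitedTo(output);
    msg2.WriteDelimitedTo(output);
}

// Reading: each call consumes exactly one length-prefixed message,
// so the parser never reads past that message's boundary.
using (var input = File.OpenRead("messages.bin"))
{
    MyMessage first = MyMessage.Parser.ParseDelimitedFrom(input);
    MyMessage second = MyMessage.Parser.ParseDelimitedFrom(input);
}

Note that ParseDelimitedFrom() won't read past the announced length, but it will block until those bytes have actually arrived, so with select you still want to make sure a complete message is available (or buffer the socket data yourself) before calling it.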
I have just started taking a look at RocksDB and was able to build a small Spring Boot-based app to perform basic CRUD operations on it. However, I was wondering if there is a UI tool that can be used to query or browse the data in RocksDB.
I am not sure if this is a valid question, but is there something like pgAdmin for Postgres, or a client utility that can be used to browse through the data in this DB?
Thanks, HK.
No, there isn't.
The reason for that is that there is no way a GUI client can know how to deserialize your data format.
I might save my values as pure strings, you might have them as JSON bytes, someone else might use protobuf. How would the GUI client know how to deserialize it and show it in the UI?
Or would it just show the raw bytes? That is unlikely to be useful.
I have developed a client/server application for IM with Qt. So far messages are sent and displayed on the client side, but when the program is closed the messages are no longer available, since proper storage is missing.
I would like to keep the messages on the client devices and avoid storing everything on the server. I don't want to use a DB either, since it would need to be installed, and I would like to keep everything quite simple.
Therefore I was thinking of simply storing everything in an encrypted file, but I couldn't come up with a proper format for that.
Does anyone have experience with this, or any suggestions on how to save the messages from different clients?
You do have a concern with data integrity in the face of unplanned termination of your software, due to bugs in your code, transient hardware errors, power outages, etc. That's the problem that everyone using "plain files" usually ignores, as it's a hard problem to solve and requires extensive testing and know-how.
That's why you should use an embedded database. It will solve that, and many other problems as well. SQLite is the de facto standard for applications such as yours. You can add any encryption you wish, as SQLite provides hooks that let you implement the writing and reading of pages; you'd do the encryption there.
One little-appreciated aspect of SQLite specifically is the amount of testing it gets during development. The test harness, most of it non-public, is probably worth way more than the published SQLite code (>1M USD). SQLite is used in aerospace applications, e.g. IIRC in code classified as DAL-B under DO-178B.
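As a rough illustration of the embedded-database approach (not Qt-specific: in a Qt client you would go through QSqlDatabase with the QSQLITE driver, and the table and column names below are just placeholders), storing messages could look something like this sketch, shown here with C# and Microsoft.Data.Sqlite for brevity:

using Microsoft.Data.Sqlite;

using (var conn = new SqliteConnection("Data Source=chat_history.db"))
{
    conn.Open();

    // One table holding every message exchanged with a peer.
    var create = conn.CreateCommand();
    create.CommandText =
        "CREATE TABLE IF NOT EXISTS messages (" +
        "  id INTEGER PRIMARY KEY AUTOINCREMENT," +
        "  peer TEXT NOT NULL," +
        "  sent_at TEXT NOT NULL," +
        "  body TEXT NOT NULL)";
    create.ExecuteNonQuery();

    // Inserting a received message; parameters avoid quoting/injection issues.
    var insert = conn.CreateCommand();
    insert.CommandText =
        "INSERT INTO messages (peer, sent_at, body) VALUES ($peer, $at, $body)";
    insert.Parameters.AddWithValue("$peer", "alice");
    insert.Parameters.AddWithValue("$at", System.DateTime.UtcNow.ToString("o"));
    insert.Parameters.AddWithValue("$body", "hello there");
    insert.ExecuteNonQuery();
}

Page-level encryption would sit below this layer (for example via SQLCipher or SQLite's codec/VFS hooks mentioned above), so the application code stays essentially the same.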
I'm looking for use-cases for using reactive streams within a servlet container (or just a HTTP server).
The Jetty project has started being asked "is Jetty reactive?", and we've noticed the proposal to add reactive streams to Java 9.
So we've started some experiments with using the reactive streams API for async servlet IO, which are interesting enough, but they lack focus because we don't have real use cases to tell us which concerns matter most.
So does anybody have any good use cases they could share/explain, so that we can direct our Jetty experiments to meet their needs? The sort of thing I've imagined is an RS-based database publisher sending objects all the way out over an HTTP response or WebSocket connection, using Flow.Processors for the conversions along the way.
A viable use case is consuming POSTed multipart form data, particularly when uploading files.
The Typesafe ConductR project (disclaimer: I'm the tech lead for it) receives multipart form data when a user loads a bundle. We use akka-streams/http.
We read off the first two parts of the stream, as our protocol specifies that they must declare some metadata so that we know which node to write the bundles to. After some validation, we determine the node to write them to and connect the partially consumed stream. Thus the node that receives the request to upload the bundle negotiates which node it is going to write it to, without having to consume the entire stream (which could be 200 MB) and then write it out again.
Writing out multipart form data is also a great use case, given that you can stream a file from disk as a source and pass it on to some HTTP endpoint, i.e. the client side of what I describe above.
The benefit in both use cases is that you minimise the amount of memory needed to move bytes over a network, and you only perform file I/O where it is necessary.
A group of friends is working on a little game that would listen to the microphone as part of the interaction. We've tinkered with Processing and Flex. What we'd like to know is whether anyone has succeeded in:
recording from the microphone using a web app
performing an FFT on this microphone data
In the case of Flex, according to the docs: "Because sound data from a microphone...do not pass through the global SoundMixer object, the SoundMixer.computeSpectrum() method will not return data from those sources." [1]
Your footnote kind of answered your own question. :) No, it is not possible to read the raw bytes from the microphone from the client side. It is possible that Adobe will implement this in Flash 11, but don't hold your breath for it.
If you set up a flash server, such as Red5, then you can read the raw stream on the backend and send FFT data back to the client over AMF. This is actually possible to do with very low latency, though it may still be too high depending on the nature of your application. There are several examples on the Red5 page about how to accomplish things similar to this using a Java webapp working on the backend.
There are a lot of people requesting this feature.
You may see many workarounds involving getMicrophone().
I am doing file transfers, but the FileReference API doesn't support file chunking. Has anyone done this before? For example, I would like to be able to upload a 1 GB file from an AIR client to a custom PHP/Java/etc. service.
It seems that all you should have to do is use the upload() method. The PHP or Java service should be doing the chunking.
// AIR HTML/JS API; File inherits upload() from FileReference.
var myHugeFile = new air.File("myHugeLocal.file");
myHugeFile.upload(new air.URLRequest("http://your.website.com/uploadchunker.php"));
There is a much more elaborate example of using FileReference in the Adobe learning area here:
http://www.adobe.com/devnet/air/flex/articles/uploading_air_app_to_server.html
Three options jump out on this:
Use an FTP service that supports resumable transfers, assuming Flash supports this as well. Maybe not an option if you want to communicate with a custom service of your own.
Leverage HTTP's partial-content header support. Only applicable if AIR allows access to the appropriate HTTP headers (Content-Range and Content-Length). This is what BITS does. Probably a bit harder to implement.
Hand-roll your own TCP or UDP protocol exchange. Not for the faint of heart. I'd look in the OSS space before going this route.
I think FileReference does chunk; at least that is what I have observed. Using a tool like Fiddler, you can watch it in action. If you analyze the outgoing headers of a FileReference upload, they are chunked.
If resuming is what you're after, I cannot say how you would go about that with FileReference. I have uploaded small files in generic POSTs, but that requires the Flash/AIR client to load all the bytes into the app. In AIR that may or may not crash Flash with a 1 GB file (depends on your system, I guess).