How do I send multiple files with the Adafruit CircuitPython HTTPServer library? - adafruit-circuitpython

I've made a webserver using CircuitPython. I can't send multiple files with HTTPResponse using the HTTPServer library. How can I do this?

Related

Connecting an Arduino to Google Colab

I have tried to connect an Arduino Uno to Python code on Colaboratory, to send information to the Arduino using the PySerial library, but I didn't succeed. I keep getting this error on Colab:
" SerialException: [Errno 2] could not open port COM5: [Errno 2] No such file or directory: 'COM5' "
I'm sure that the port is connected, but Colab can't find it because it's in the local environment.
What can I do to solve this error? Is there any way I can connect my Arduino to Google Colab?
Hakam Salti.
Thanks.
The Colab instance is connected to a computer in the Google cloud (unless you've set up a local instance): the code doesn't execute on your machine. You're typing code into a web interface that remotely runs that code, returns the result, and displays it back in that interface.
The Arduino is connected to your computer (a PC, by the looks of the serial port).
Your question doesn't specify which way the data goes: send Arduino data to Colab, send Colab data to Arduino, or bidirectional.
If you had a Wi-Fi-connected microcontroller, you could push the data online through an API, like Firebase.
For USB, you'd need this sort of connection:
Arduino (OS/serial driver) <-> Browser <-> Colab
To connect the Arduino to the browser you'd need to use WebSerial, or an app that has serial access and can also act as a web server (such as a WebSocket server). Since you're using Python for Colab, you can write a script on your PC that uses PySerial and a WebSocket server such as Tornado, Flask, etc. (p5.js does something like this with Electron in JS, and they have prebuilt apps)
The second part is getting that data, which is now available to your browser but only locally, into the Colab notebook. There are multiple ways of doing this, but this WebCam example looks like a good starting point.
Another variant of this might be:
Write a local script that acts as a basic web server (HTTP/WebSocket) and can access the serial port (a minimal sketch follows this list)
make that local web server accessible from the internet (ngrok can help here)
access that WebSocket version from Python (via a websocket client or HTTP client pip package)
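For the first step, here's a minimal sketch of such a bridge, assuming PySerial and the websockets pip package are installed and the Arduino is on COM5 (adjust the port name and baud rate for your setup; on older websockets releases the handler also receives a path argument, hence the default parameter):
import asyncio
import serial
import websockets

# Hypothetical port and baud rate; change these to match your Arduino.
ser = serial.Serial("COM5", 9600, timeout=0)

async def bridge(websocket, path=None):
    # Forward WebSocket messages to the serial port and echo serial data back.
    while True:
        try:
            message = await asyncio.wait_for(websocket.recv(), timeout=0.1)
            if isinstance(message, str):
                message = message.encode()
            ser.write(message)
        except asyncio.TimeoutError:
            pass
        data = ser.read(ser.in_waiting or 0)
        if data:
            await websocket.send(data.decode(errors="replace"))

async def main():
    async with websockets.serve(bridge, "localhost", 8081):
        await asyncio.Future()  # run until interrupted

asyncio.run(main())
Pointing ngrok at port 8081 then makes this bridge reachable from the notebook.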
Update I've posted a couple of options using p5.serialport here.
For reference, here are a couple of tested options using the aforementioned p5.serial (and its p5.serialcontrol utility):
Option 1: use Jupyter's HTML feature to run client-side code (p5.serial) connecting to the p5.serialcontrol utility on your computer:
from google.colab import files
from IPython.display import HTML, Audio
from google.colab.output import eval_js
from base64 import b64decode
C_HTML = """
<script language="javascript" type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.1/p5.min.js"></script>
<script language="javascript" type="text/javascript" src="https://cdn.jsdelivr.net/npm/p5.serialserver#0.0.29/lib/p5.serialport.js" onload="setupSerial();" onerror="console.warn(e)";></script>
<script>
const serialPort = 'COM13';
let serial;
let isOn = false;
function setupSerial(){
serial = new p5.SerialPort();
serial.open(serialPort);
setInterval(blink, 1000);
console.log("serial setup complete");
}
function blink(){
isOn = !isOn;
if(serial){
serial.write(isOn ? '1' : '0');
}
}
serialInterval = setInterval(checkSerial,500);
function checkSerial(){
console.log('p5.SerialPort',p5.SerialPort);
if(p5.SerialPort){
clearInterval(serialInterval);
setupSerial();
}
}
</script>
"""
def run():
display(HTML(C_HTML))
run()
Option 2: use a reverse tunnel (ngrok) to have the Python side connect to p5.serialcontrol via WebSockets (though you'd need to compose the messages p5.serialcontrol expects manually):
run p5.serialcontrol
run ngrok tcp 8081 from Terminal/Command Prompt (note you may need to set up a free auth token for TCP)
install websocket-client on Colab and connect to the websocket (note that, as opposed to using the p5.serial library in JS, you'd manually put together the messages to send to p5.serialcontrol's WebSocket server (e.g. '{"method":"openserial","data":{"serialport":"COM13","serialoptions":{}}}' to open the serial port, '{"method":"write","data":"1"}' to write '1' to serial, etc.))
To install websocket-client in Colab you'd use:
!pip install -q websocket-client
and here's an example that turns the LED on for 1 second then off (using the above Arduino example):
from time import sleep
import websocket

# when the websocket is open, send a serial open command (on port COM13),
# then send a '1' and a '0' with 1 second in between
def on_open(ws):
    ws.send('{"method":"openserial","data":{"serialport":"COM13","serialoptions":{}}}')
    sleep(1)
    print('sending ON')
    ws.send('{"method":"write","data":"1"}')
    sleep(1)
    print('sending OFF')
    ws.send('{"method":"write","data":"0"}')

def on_message(ws, message):
    print(message)

def on_error(wsapp, err):
    print("Got an error: ", err)

# YOUR_NGROK_TCP_SERVER_HERE example: #.tcp.ngrok.io:#####
wsapp = websocket.WebSocketApp("ws://YOUR_NGROK_TCP_SERVER_HERE",
                               on_message=on_message,
                               on_error=on_error)
wsapp.on_open = on_open
wsapp.run_forever()
(also note run_forever() is a blocking loop: based on your application you may want to manually open and control a websocket connection (as opposed to using WebSocketApp) or use threading, depending on what makes sense)
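For instance, a minimal sketch of the threading variant, reusing the wsapp object from above:
import threading

# run the blocking loop in a background thread so the rest of the notebook stays responsive
t = threading.Thread(target=wsapp.run_forever, daemon=True)
t.start()
# ... do other work; call wsapp.close() when you're finished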

How to post data as a stream with HttpClient in .NET Core?

Here is my scenario:
The user speaks continuously into a microphone on the client side, and the client code needs to post the recorded audio data to the server chunk by chunk, through the HTTP protocol.
The server side can receive and process the audio data chunk by chunk.
The question is about the code on the client side.
Previously, using .NET Framework, I could implement the posting as a stream (in chunks) with the code below:
https://learn.microsoft.com/en-us/dotnet/api/system.net.httpwebrequest.sendchunked?view=netframework-4.8#examples
But it doesn't work after switching to .NET Core, which is a known issue mentioned here:
https://github.com/dotnet/runtime/issues/18632#issuecomment-470611032
I saw HttpClient was recommended in the above thread.
But I didn't find any sample code for my scenario using HttpClient.
For my scenario I need to connect to the server, get a stream based on this connection, and then write my recorded audio from the microphone to the server chunk by chunk through this stream.
This can be done using the sample code in the above link.
How should I write the code to achieve the same effect using HttpClient?
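As a general illustration of the chunk-by-chunk effect (not .NET-specific): Python's requests library switches to chunked transfer encoding when the request body is a generator. A minimal sketch, with a stand-in generator and a hypothetical endpoint:
import requests

def audio_chunks():
    # stand-in for a microphone capture loop; yields raw audio chunks
    for _ in range(5):
        yield b"\x00" * 3200  # hypothetical 100 ms of 16 kHz 16-bit mono audio

# a generator body is sent with Transfer-Encoding: chunked
resp = requests.post("http://example.com/upload", data=audio_chunks())
print(resp.status_code)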

protobuf vs gRPC

I'm trying to understand protobuf and gRPC and how I can use both. Could you help me understand the following:
Considering the OSI model, what is where? For example, is Protobuf at layer 4?
Thinking through a message transfer, how is the "flow"? What does gRPC do that protobuf misses?
If the sender uses protobuf, can the server use gRPC, or does gRPC add something that only a gRPC client can deliver?
If gRPC makes synchronous and asynchronous communication possible, Protobuf is just for the marshalling and therefore does not have anything to do with state - true or false?
Can I use gRPC in a frontend application to communicate instead of REST or GraphQL?
I already know - or assume I do - that:
Protobuf
Binary protocol for data interchange
Designed by Google
Uses generated "Struct" like description at client and server to un-/-marshall message
gRPC
Uses protobuf (v3)
Again from Google
Framework for RPC calls
Makes use of HTTP/2 as well
Synchronous and asynchronous communication possible
I assume again that it's an easy question for someone already using the technology. I would still ask you to be patient with me and help me out. I would also be really thankful for any network deep dive on these technologies.
Protocol buffers is (are?) an Interface Definition Language and serialization library:
You define your data structures in its IDL, i.e. describe the data objects you want to use
It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds "rpc" syntax, which lets you define Remote Procedure Call method signatures using the Protobuf data structures as data types:
You define your data structures
You add your rpc method definitions
It provides code to serve up and call the method signatures over a network
You can still serialize the data objects manually with Protobuf if you need to (a sketch follows below)
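For the last point, a minimal sketch, assuming a hypothetical person.proto containing message Person { string name = 1; int32 id = 2; } compiled with protoc --python_out=. person.proto:
import person_pb2

# object -> bytes (e.g. to write to disk or send over any transport)
p = person_pb2.Person(name="Ada", id=42)
data = p.SerializeToString()

# bytes -> object
q = person_pb2.Person()
q.ParseFromString(data)
print(q.name, q.id)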
In answer to the questions:
gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects
Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plug other serializers into gRPC - but using gRPC would be easier
True
Yes you can
Actually, gRPC and Protobuf are 2 completely different things. Let me simplify:
gRPC manages the way a client and a server can interact (just like a web client/server with a REST API)
protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has 2 sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS), add an authentication layer (using interceptors), ...
You can use protobuf inside any program, which has no need to be client/server. If you need to exchange data, and want it to be strongly typed, protobuf is a nice option (fast & reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol (a sketch follows below).
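A minimal Python sketch of that combination, assuming grpcio is installed and a Greeter service (rpc SayHello (HelloRequest) returns (HelloResponse), as in the proto example further down) has been compiled into hypothetical greeter_pb2 / greeter_pb2_grpc modules:
from concurrent import futures
import grpc
import greeter_pb2
import greeter_pb2_grpc

class Greeter(greeter_pb2_grpc.GreeterServicer):
    def SayHello(self, request, context):
        # protobuf objects carry the data; gRPC carries the call
        return greeter_pb2.HelloResponse(responseMsg="Hello, " + request.myname)

server = grpc.server(futures.ThreadPoolExecutor(max_workers=4))
greeter_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()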
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
gRPC is a framework built by Google, and it is used in production projects by Google itself; Hyperledger Fabric is built with gRPC, and there are many open-source applications built with gRPC.
Protobuf is a data representation, like JSON; it is also by Google. In fact, they have many thousands of .proto files generated in their production projects.
gRPC
gRPC is an open-source framework developed by Google
It allows us to define the Request & Response for an RPC and have the framework handle the rest
REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
Built on top of HTTP/2
Provides auth, load balancing, monitoring, logging
[HTTP/2]
HTTP/1.1 was released in 1997, a long time ago
HTTP/1.x opens a new TCP connection to the server on each request
It doesn't compress headers
No server push; it just works with request and response
HTTP/2 was released in 2015 (based on SPDY)
Supports multiplexing
Client & server can push messages in parallel over the same TCP connection
Greatly reduces latency
HTTP/2 supports header compression
HTTP/2 is binary
Protobuf is binary, so it is a great match for HTTP/2
[TYPES]
Unary
Client streaming
Server streaming
Bidirectional streaming
gRPC servers are async by default
gRPC clients can be sync or async
Protobuf
Protocol buffers are language-agnostic
Parsing protocol buffers (a binary format) is less CPU-intensive
[Naming]
Use CamelCase for message names
underscore_separated for field names
Use CamelCase for enums and CAPITALS_WITH_UNDERSCORES for value names
[Comments]
Supports //
Supports /* */
[Advantages]
Data is fully typed
Data is very compact (less bandwidth usage)
A schema (message) is needed to generate code and to read the data
Documentation can be embedded in the schema
Data can be read across any language
The schema can evolve over time in a safe manner
Faster than XML
Code is generated for you automatically
Google invented Protobuf; they use 48,000 protobuf messages & 12,000 .proto files
Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
gRPC is an instantiation of the RPC integration style that is based on the protobuf serialization library.
There are five integration styles: RPC, File Transfer, MOM, Distributed Objects, and Shared Database.
RMI is another instantiation of the RPC integration style, and there are many others. MQ is an instantiation of the MOM integration style, as is RabbitMQ. An Oracle database schema is an instantiation of the Shared Database integration style. CORBA is an instantiation of the Distributed Objects integration style. And so on.
Avro is an example of another (binary) serialization library.
gRPC (Google Remote Procedure Call) is a client-server structure.
Protocol buffers are a language-neutral, platform-neutral extensible mechanism for serializing structured data.
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloResponse) {}
}

message HelloRequest {
  string myname = 1;
}

message HelloResponse {
  string responseMsg = 1;
}
Protocol buffers are used to exchange data between the gRPC client and the gRPC server; they are the protocol between the two. The protocol buffer is implemented as a .proto file in a gRPC project. It defines the interface, i.e. the service provided by the server side, the message format between client and server, and the rpc methods the client uses to access the server.
Both the client and server sides have the same proto files. (One real example: envoy xDS gRPC client-side proto files and server-side proto files.) This means that both the client and server know the interface, the message format, and the way the client accesses services on the server side.
The proto files (i.e. the protocol buffer definitions) will be compiled into a real language.
The generated code contains both stub code for clients to use and an abstract interface for servers to implement, both with the methods defined in the service.
A service defined in the proto file (i.e. in the protocol buffer) is translated into an abstract class xxxxImplBase (i.e. the interface on the server side).
newBlockingStub() creates a client stub that makes synchronous calls, which is one way to invoke a remote procedure call (an rpc in the proto file); newStub() creates an asynchronous one.
And the methods which build request and response messages are also implemented in the generated files.
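As a sketch of the client side in Python (assuming the Greeter proto above was compiled with python -m grpc_tools.protoc into hypothetical helloworld_pb2 / helloworld_pb2_grpc modules, and a server is listening on localhost:50051):
import grpc
import helloworld_pb2
import helloworld_pb2_grpc

with grpc.insecure_channel("localhost:50051") as channel:
    # the client stub comes from the generated code
    stub = helloworld_pb2_grpc.GreeterStub(channel)
    reply = stub.SayHello(helloworld_pb2.HelloRequest(myname="world"))
    print(reply.responseMsg)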
I re-implemented simple client and server-side samples based on the samples in the official docs: cpp client, cpp server, java client, java server, springboot client, springboot server
Recommended Useful Docs:
cpp/helloworld/README.md#generating-grpc-code,
cpp/basics/#generating-client-and-server-code,
cpp/basics/#defining-the-service,
generated-code/#client-stubs,
a blocking/synchronous stub
StreamObserver
how-to-use-grpc-with-spring-boot
Others: core-concepts,
gRPC can use protocol buffers as both its Interface Definition Language (IDL) and as its underlying message interchange format
In its simplest form, gRPC is like a public vehicle: it exchanges data between client and server.
The protocol buffer is like your bus ticket: it decides where you can and cannot go.

Upload file via SSH2 protocol using QNetworkAccessManager

I need to upload an image file to a server using the SSH2 protocol on port 22. SSH2 with QNetworkAccessManager is not a popular topic on Google either. Here is my code:
// 'file' is a QFile opened for reading that must outlive the asynchronous request
QUrl uploadUrl("ssh2://192.168.10.227/var/www/html/img/" + mImgFile);
uploadUrl.setUserName("xxxxxx");
uploadUrl.setPassword("xxxxxx");
uploadUrl.setPort(22);
qDebug() << uploadUrl.toString();
QNetworkRequest uploadReq(uploadUrl);
mReply = mNetworkManager->put(uploadReq, &file);
connect(mReply, SIGNAL(uploadProgress(qint64, qint64)), this, SLOT(uploadProgress(qint64, qint64)));
With the URL scheme as "ssh2", "sftp", or "ssh2.sftp", it outputs that the protocol is unknown. The reason I used the strange-looking "ssh2.sftp" is here (just a little PHP code to view).
I want to know whether SSH2 can be used with QNetworkAccessManager at all. If yes, what is the correct URL format to upload an image file?
There is no SSH support in Qt; for example, see
Howto implement SFTP with Qt/QNetworkAccessManager (C++) and How to easily establish an SSH connection in Qt?
The list of URL schemes supported by QNetworkAccessManager can be obtained with QNetworkAccessManager::supportedSchemes(), which returns ("ftp", "file", "qrc", "http", "https", "data") in default Qt builds (https is supported only if an external OpenSSL library is found, since it is also not supplied with Qt).
So, it is still necessary to use the external C library libssh2 to work with the SSH2 protocol over native sockets.
I found here that there was an old Qt extension, LibQxt, with Qt SSH support. However, it is no longer maintained.

Proxy an RTMP stream

How could I proxy an RTMP stream?
I have two Raspberry Pis streaming live video from raspicams on my LAN. Each Raspberry Pi sends its video to ffmpeg, which wraps it in FLV and sends it to crtmpserver.
A third server, using nginx, has a static HTML page with two instances of jwplayer, each pointing to one Raspberry Pi.
The setup is just like this one.
The web server uses authentication, and I'd like the streams not to be public either.
I'm thinking of trying nginx-rtmp-module, but I am not sure whether it would help me. Also, it seems dormant and has many open issues.
I'm open to suggestions, thanks in advance!
You can use MonaServer with this client (copy it into the www/ directory of MonaServer), which listens on UDP port 6666 and waits for an FLV file, publishing it under the name "file".
Then you should already be able to play your stream with jwplayer (with the address rtmp:///file) or with any other player. MonaServer supports the HTTP protocol, so you can host your HTML page without nginx if you want.
Now, if you want to filter subscriptions to "file", you need to write a client:onSubscribe function in your main.lua script, like this:
function onConnection(client)
  INFO("Connection from ", client.address)
  function client:onSubscribe(listener)
    INFO("Subscribing to ", listener.publication.name, "...")
    if not client.right then
      error("no rights to play it")
    end
  end
end
(Here you need to replace the "not client.right" check and implement your own authentication function for your purposes.)
Going further, you could use another Flash video client that supports RTMFP in order to handle a large number of clients. Contact me (jammetthomas AT gmail.com) for more information.
