I am calling a Python API endpoint which sends me the data to be written to the device. The data received is encrypted, and I am converting it to a byte array using the code below:
let codePoints = [];
for (let i = 0; i < encData.length; i++) {
  codePoints.push(encData.codePointAt(i));
}
const arraybufferdata = Uint8Array.from(codePoints);
I am trying to write this byte array using following code:
characteristic
  .writeValueWithoutResponse(arraybufferdata)
  .then(() => {
    console.log("successfully written to the device");
  });
I am getting the below error:
I also tried writing by splitting the byte array into chunks as follows:
writeOut(data, start) {
  if (start >= data.byteLength) {
    console.log("successfully written to the device");
    return;
  }
  this.mychar
    .writeValueWithoutResponse(data.slice(start, start + 256))
    .then(() => {
      this.writeOut(data, start + 256);
    });
}

writeBuffer(data, start) {
  this.writeOut(data, start);
}
this.writeBuffer(arraybufferdata, 0);
This throws an error as well, and the device does not accept the data. What could be the issue?
UI framework: Angular 7
OS: Windows 10
Browser: Chrome 85
Please help!
Found the issue:
I was writing to the wrong characteristic! (facepalm)
It's working now!
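In case anyone else runs into this: one way to avoid writing to the wrong characteristic is to enumerate the service's characteristics and check their properties before writing. A sketch, assuming the Web Bluetooth API; the helper name is made up:

```javascript
// Hypothetical helper: given the characteristics of a GATT service, pick the
// first one that supports writing. Web Bluetooth characteristic objects expose
// a `properties` object with boolean flags such as `write` and
// `writeWithoutResponse`.
function findWritableCharacteristic(characteristics) {
  return characteristics.find(
    (c) => c.properties.write || c.properties.writeWithoutResponse
  ) || null;
}

// In the browser you would obtain the list roughly like this (sketch):
//
// const service = await server.getPrimaryService(serviceUuid);
// const chars = await service.getCharacteristics();
// chars.forEach((c) => console.log(c.uuid, c.properties));
// const target = findWritableCharacteristic(chars);
```

Logging each characteristic's UUID and properties makes it obvious when the one you grabbed is read- or notify-only.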
I am using the following code from this site to pair and connect to a Bluetooth Low Energy printer via the browser for my ASP.NET MVC web app.
navigator.bluetooth.requestDevice({
  filters: [{ services: [0xffe5] }]
})
.then(function(device) {
  // Step 2: Connect to it
  return device.gatt.connect();
})
.then(function(server) {
  // Step 3: Get the Service
  return server.getPrimaryService(0xffe5);
})
.then(function(service) {
  // Step 4: Get the Characteristic
  return service.getCharacteristic(0xffe9);
})
.then(function(characteristic) {
  // Step 5: Write to the characteristic
  var data = new Uint8Array([0xbb, 0x25, 0x05, 0x44]);
  return characteristic.writeValue(data);
})
.catch(function(error) {
  // And of course: error handling!
  console.error('Connection failed!', error);
});
This code works perfectly, but what I don't understand is how to decouple the pairing, connecting, and service/characteristic retrieval from the actual writing of the characteristic. Ideally I would like to pair, connect, and get the service and characteristic once, then just keep writing values so I can print multiple things without the pairing prompt showing up every time. Every example I've seen ties them all together in the .then chain instead of keeping them standalone.
Basically how to decouple this:
navigator.bluetooth.requestDevice({
  filters: [{ services: [0xffe5] }]
})
.then(function(device) {
  // Step 2: Connect to it
  return device.gatt.connect();
})
.then(function(server) {
  // Step 3: Get the Service
  return server.getPrimaryService(0xffe5);
})
.then(function(service) {
  // Step 4: Get the Characteristic
  return service.getCharacteristic(0xffe9);
})
From This:
.then(function(characteristic) {
  // Step 5: Write to the characteristic
  var data = new Uint8Array([0xbb, 0x25, 0x05, 0x44]);
  return characteristic.writeValue(data);
})
Thanks. This is my first experience with BTLE, so sorry if it's a dumb question, but I'm struggling a bit.
What I did in my Angular application was to use the async/await pattern and assign the response object to a global variable like this:
this.bluetoothDevice = await navigator.bluetooth.requestDevice({acceptAllDevices: true});
Then I use another variable to manage the connection:
this.serverConnected = await this.connect(this.bluetoothDevice.gatt);
And finally, I use those variables in separate methods inside my class to manage the disconnect:
onDisconnectButtonClick() {
  if (!this.bluetoothDevice) {
    return;
  }
  console.log('Disconnecting from Bluetooth Device...');
  if (this.bluetoothDevice.gatt.connected) {
    this.bluetoothDevice.gatt.disconnect();
  } else {
    console.log('> Bluetooth Device is already disconnected');
    this.deviceInfo = 'You have been disconnected.';
  }
}
Hope it helps you!
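To make the decoupling concrete, here is a minimal sketch (the class and method names are illustrative, not from the original code) that performs the pairing, connection, and characteristic lookup once, caches the characteristic on the instance, and exposes a separate write method that can be called repeatedly without triggering the pairing prompt again:

```javascript
// Sketch: connect once, cache the characteristic, reuse it for every write.
// 0xffe5 / 0xffe9 are the service / characteristic IDs from the question;
// the class and method names are made up for illustration.
class BlePrinter {
  async connect() {
    const device = await navigator.bluetooth.requestDevice({
      filters: [{ services: [0xffe5] }],
    });
    const server = await device.gatt.connect();
    const service = await server.getPrimaryService(0xffe5);
    // Cache the characteristic; later writes reuse it directly.
    this.characteristic = await service.getCharacteristic(0xffe9);
  }

  async print(bytes) {
    if (!this.characteristic) {
      throw new Error('Call connect() first');
    }
    return this.characteristic.writeValue(Uint8Array.from(bytes));
  }
}
```

Usage: call `connect()` once from a user gesture (the chooser requires one), then call `print([...])` as many times as needed while the GATT connection stays open.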
I am trying to get a final speech transcription/recognition result from a Fleck websocket audio stream. The method OnOpen executes code when the websocket connection is first established and the OnBinary method executes code whenever binary data is received from the client. I have tested the websocket by echoing the voice into the websocket and writing the same binary data back into the websocket at the same rate. This test worked so I know that the binary data is being sent correctly (640 byte messages with a 20ms frame size).
Therefore, my code is failing and not the service. My aim is to do the following:
1. When the websocket connection is created, send the initial audio config request to the API with SingleUtterance == true
2. Run a background task that listens for the streaming results, waiting for isFinal == true
3. Send each binary message received to the API for transcription
4. When the background task recognises isFinal == true, stop the current streaming request and create a new request - repeating steps 1 through 4
The context of this project is transcribing all single utterances in a live phone call.
socket.OnOpen = () =>
{
    firstMessage = true;
};

socket.OnBinary = async binary =>
{
    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();
    if (firstMessage == true)
    {
        await streamingCall.WriteAsync(
            new StreamingRecognizeRequest()
            {
                StreamingConfig = new StreamingRecognitionConfig()
                {
                    Config = new RecognitionConfig()
                    {
                        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
                        SampleRateHertz = 16000,
                        LanguageCode = "en",
                    },
                    SingleUtterance = true,
                }
            });
        Task getUtterance = Task.Run(async () =>
        {
            while (await streamingCall.ResponseStream.MoveNext(
                default(CancellationToken)))
            {
                foreach (var result in streamingCall.ResponseStream.Current.Results)
                {
                    if (result.IsFinal == true)
                    {
                        Console.WriteLine("This test finally worked");
                    }
                }
            }
        });
        firstMessage = false;
    }
    else if (firstMessage == false)
    {
        streamingCall.WriteAsync(new StreamingRecognizeRequest()
        {
            AudioContent = Google.Protobuf.ByteString.CopyFrom(binary, 0, 640)
        }).Wait();
    }
};
.Wait() is a blocking call being made inside an async lambda. The two don't mix well and can lead to deadlocks.
Simply keep the code async all the way through:
//...omitted for brevity
else if (firstMessage == false)
{
    await streamingCall.WriteAsync(new StreamingRecognizeRequest()
    {
        AudioContent = Google.Protobuf.ByteString.CopyFrom(binary, 0, 640)
    });
}
I'm struggling to create a simple POC for an iOS PWA with a small video.
https://test-service-worker.azurewebsites.net/
I have a simple service worker registration, and I cache a small (700 kB) video. When I'm online, the page works just fine. When I turn on airplane mode and go offline, the page still reloads but the video will not play.
This POC is based on Google Chrome example
https://googlechrome.github.io/samples/service-worker/prefetch-video/
The video from this example will not work in iOS offline for sure because it only caches 50MB. Mine is only 700kB so well below the limit.
My POC works just fine in Chrome but it won't in the latest mobile Safari (iOS 11.4).
What do I need to change in order to make this work on iOS 11.4+? Is this a bug in Safari?
It turns out Safari is just quite strict. I'm leaving the question here - hopefully it will save someone's time.
What's happening:
Safari requests only part of the video: first it sends a request with the header 'range: bytes=0-1'. It expects an HTTP 206 response, which reveals the size of the file.
Based on that response it learns the length of the video, and then it asks for individual byte ranges of the file (for example range: bytes=0-20000, etc.).
If your response is longer than requested, Safari will immediately stop processing subsequent requests.
This is exactly what is happening in Google Chrome example and what was happening in my POC. So if you use fetch like this it will work both online & offline:
// This code is based on https://googlechrome.github.io/samples/service-worker/prefetch-video/
self.addEventListener('fetch', function(event) {
  var headersLog = [];
  for (var pair of event.request.headers.entries()) {
    console.log(pair[0] + ': ' + pair[1]);
    headersLog.push(pair[0] + ': ' + pair[1]);
  }
  console.log('Handling fetch event for', event.request.url, JSON.stringify(headersLog));
  if (event.request.headers.get('range')) {
    var rangeHeader = event.request.headers.get('range');
    var rangeMatch = rangeHeader.match(/^bytes\=(\d+)\-(\d+)?/);
    var pos = Number(rangeMatch[1]);
    var pos2 = rangeMatch[2];
    if (pos2) { pos2 = Number(pos2); }
    console.log('Range request for ' + event.request.url, 'Range: ' + rangeHeader, 'Parsed as: ' + pos + '-' + pos2);
    event.respondWith(
      caches.open(CURRENT_CACHES.prefetch)
        .then(function(cache) {
          return cache.match(event.request.url);
        }).then(function(res) {
          if (!res) {
            console.log('Not found in cache - doing fetch');
            return fetch(event.request)
              .then(res => {
                console.log('Fetch done - returning response ', res);
                return res.arrayBuffer();
              });
          }
          console.log('FOUND in cache - skipping fetch');
          return res.arrayBuffer();
        }).then(function(ab) {
          console.log('Response processing');
          let responseHeaders = {
            status: 206,
            statusText: 'Partial Content',
            headers: [
              ['Content-Type', 'video/mp4'],
              ['Content-Range', 'bytes ' + pos + '-' +
                (pos2 || (ab.byteLength - 1)) + '/' + ab.byteLength]
            ]
          };
          console.log('Response: ', JSON.stringify(responseHeaders));
          var abSliced = {};
          if (pos2 > 0) {
            abSliced = ab.slice(pos, pos2 + 1);
          } else {
            abSliced = ab.slice(pos);
          }
          console.log('Response length: ', abSliced.byteLength);
          return new Response(abSliced, responseHeaders);
        })
    );
  } else {
    console.log('Non-range request for', event.request.url);
    event.respondWith(
      // caches.match() will look for a cache entry in all of the caches available to the service worker.
      // It's an alternative to first opening a specific named cache and then matching on that.
      caches.match(event.request).then(function(response) {
        if (response) {
          console.log('Found response in cache:', response);
          return response;
        }
        console.log('No response found in cache. About to fetch from network...');
        // event.request will always have the proper mode set ('cors', 'no-cors', etc.) so we don't
        // have to hardcode 'no-cors' like we do when fetch()ing in the install handler.
        return fetch(event.request).then(function(response) {
          console.log('Response from network is:', response);
          return response;
        }).catch(function(error) {
          // This catch() will handle exceptions thrown from the fetch() operation.
          // Note that an HTTP error response (e.g. 404) will NOT trigger an exception.
          // It will return a normal response object that has the appropriate error code set.
          console.error('Fetching failed:', error);
          throw error;
        });
      })
    );
  }
});
We have a group in Telegram with a rule that says no one may leave a message in the group between 23:00 and 7:00. I want to automatically delete messages that come into the group between those times. Could anyone tell me how I can do that with telegram-cli or any other Telegram client?
Use the new version of telegram-cli. It's not fully open source, but you can download a binary from its site. You can also find some examples there.
I hope the following snippet in JavaScript will help you to achieve your goal.
var spawn = require('child_process').spawn;
var readline = require('readline');

// delay between restarts of the client in case of failure
const RESTARTING_DELAY = 1000;
// the main object for a process of telegram-cli
var tg;

function launchTelegram() {
  tg = spawn('./telegram-cli', ['--json', '-DCR'],
    { stdio: ['ipc', 'pipe', process.stderr] });

  readline.createInterface({ input: tg.stdout }).on('line', function(data) {
    var obj;
    try {
      obj = JSON.parse(data);
    } catch (err) {
      if (err.name == 'SyntaxError') {
        // sometimes the client sends not only JSON; plain text needs no
        // processing, just output it for easy debugging
        console.log(data.toString());
      } else {
        throw err;
      }
    }
    if (obj) {
      processUpdate(obj);
    }
  });

  tg.on('close', function(code) {
    // sometimes telegram-cli fails due to bugs; try to restart it,
    // skipping problematic messages
    setTimeout(function(tg) {
      tg.kill(); // the program terminates by sending double SIGINT
      tg.kill();
      tg.on('close', launchTelegram); // start again for updates
                                      // as soon as it is finished
    }, RESTARTING_DELAY, spawn('./telegram-cli', { stdio: 'inherit' }));
  });
}

function processUpdate(upd) {
  // Note: Date.now() returns a number, so we need new Date() here.
  var currentHour = new Date().getHours();
  // The quiet window wraps past midnight, so the conditions are OR'ed.
  var isQuietHours = currentHour >= 23 || currentHour < 7;
  if (isQuietHours &&
      upd.ID === 'UpdateNewMessage' && upd.message_.can_be_deleted_) {
    // if the message meets the criteria, send a command to telegram-cli
    // to delete it
    tg.send({
      'ID': 'DeleteMessages',
      'chat_id_': upd.message_.chat_id_,
      'message_ids_': [ upd.message_.id_ ]
    });
  }
}

launchTelegram(); // just launch these gizmos
We activate JSON mode by passing the --json key. telegram-cli appends an underscore to all fields in objects. See all available methods in the full schema.
I'm making an HTTP request and listening for "data":
response.on("data", function (data) { ... })
The problem is that the response is chunked so the "data" is just a piece of the body sent back.
How do I get the whole body sent back?
request.on('response', function (response) {
  var body = '';
  response.on('data', function (chunk) {
    body += chunk;
  });
  response.on('end', function () {
    console.log('BODY: ' + body);
  });
});
request.end();
Over at https://groups.google.com/forum/?fromgroups=#!topic/nodejs/75gfvfg6xuc, Tane Piper provides a good solution very similar to scriptfromscratch's, but for the case of a JSON response:
request.on('response', function(response) {
  var data = [];
  response.on('data', function(chunk) {
    data.push(chunk);
  });
  response.on('end', function() {
    var result = JSON.parse(data.join(''));
    return result;
  });
});
This addresses the issue that the OP brought up in the comments section of scriptfromscratch's answer.
I never worked with the HTTP-Client library, but since it works just like the server API, try something like this:
var data = '';
response.on('data', function(chunk) {
  // append chunk to your data
  data += chunk;
});
response.on('end', function() {
  // work with your data var
});
See node.js docs for reference.
In order to support the full spectrum of possible HTTP applications, Node.js's HTTP API is very low-level, so data is received chunk by chunk, not as a whole.
There are two approaches you can take to this problem:
1) Collect data across multiple "data" events and append the results together prior to printing the output. Use the "end" event to determine when the stream is finished and you can write the output.
var http = require('http');

http.get('some/url', function (resp) {
  var respContent = '';
  resp.on('data', function (data) {
    respContent += data.toString(); // data is a Buffer instance
  });
  resp.on('end', function () {
    console.log(respContent);
  });
}).on('error', console.error);
2) Use a third-party package to abstract the difficulties involved in collecting an entire stream of data. Two different packages provide a useful API for solving this problem (there are likely more!): bl (Buffer List) and concat-stream; take your pick!
var http = require('http');
var bl = require('bl');

http.get('some/url', function (response) {
  response.pipe(bl(function (err, data) {
    if (err) {
      return console.error(err);
    }
    data = data.toString();
    console.log(data);
  }));
}).on('error', console.error);
The reason it's messed up is that you need to call JSON.parse(data.toString()). Data is a Buffer, so you can't just parse it directly.
If you don't mind using the request library
var request = require('request');

request('http://www.google.com', function (error, response, body) {
  if (!error && response.statusCode == 200) {
    console.log(body); // Print the Google web page.
  }
});
If you are dealing with non-ASCII content (especially Chinese/Japanese/Korean characters, no matter what encoding they are in), you'd better not treat the chunk data passed to the response.on('data') event as a string directly.
Concatenate the chunks as byte buffers, and decode them only in response.on('end') to get the correct result.
// Snippet in TypeScript syntax:
//
// Assuming that the server side will accept the "test_string" you post, and
// respond with a string that concatenates the content of "test_string" many
// times, so that it triggers the on("data") event multiple times.
//
import * as http from 'http';

const data2Post = '{"test_string": "swamps/沼泽/沼澤/沼地/늪"}';
const postOptions = {
  hostname: "localhost",
  port: 5000,
  path: "/testService",
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    // Do not use data2Post.length on a CJK string: it counts characters, not
    // bytes, and returns an improper value for 'Content-Length'.
    'Content-Length': Buffer.byteLength(data2Post)
  },
  timeout: 5000
};

let body: string = '';
let body_chunks: Array<Buffer> = [];
let body_chunks_bytelength: number = 0; // Used to terminate the connection on too-large POST responses if you need.

let postReq = http.request(postOptions, (res) => {
  console.log(`statusCode: ${res.statusCode}`);
  res.on('data', (chunk: Buffer) => {
    body_chunks.push(chunk);
    body_chunks_bytelength += chunk.byteLength;
    // Debug print. Note that as the chunk may contain incomplete characters,
    // the decoding may not be correct here; this only demonstrates the
    // difference compared to the final result in the res.on("end") event.
    console.log("Partial body: " + chunk.toString("utf8"));
    // Terminate the connection in case the POST response is too large. (10*1024*1024 = 10MB)
    if (body_chunks_bytelength > 10 * 1024 * 1024) {
      postReq.connection.destroy();
      console.error("Too large POST response. Connection terminated.");
    }
  });
  res.on('end', () => {
    // Decode the correctly concatenated response data
    let mergedBodyChunkBuffer: Buffer = Buffer.concat(body_chunks);
    body = mergedBodyChunkBuffer.toString("utf8");
    console.log("Body using chunk: " + body);
    console.log(`body_chunks_bytelength=${body_chunks_bytelength}`);
  });
});
How about HTTPS chunked responses? I've been trying to read a response from an API that responds over HTTPS with a Transfer-Encoding: chunked header. Each chunk is a Buffer, but when I concat them all together and try converting to a string with UTF-8, I get weird characters.
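The buffer-concatenation approach described in the previous answer applies to HTTPS as well: the weird characters appear when a multi-byte UTF-8 character is split across two chunks and each chunk is decoded separately. A minimal sketch; the helper name is made up:

```javascript
// Decode a list of Buffer chunks as UTF-8 only once the stream has ended,
// so multi-byte characters split across chunk boundaries are reassembled.
function decodeChunks(chunks) {
  return Buffer.concat(chunks).toString('utf8');
}

// Usage with the https module (sketch; the URL is a placeholder):
//
// const https = require('https');
// https.get('https://example.com/', (res) => {
//   const chunks = [];
//   res.on('data', (chunk) => chunks.push(chunk));
//   res.on('end', () => console.log(decodeChunks(chunks)));
// }).on('error', console.error);
```

The key point is that nothing about the transport (HTTP vs. HTTPS) changes the fix: keep the chunks as Buffers and decode exactly once.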