Sony ZR5 Audio Control API - sony-audio-control-api

How can I send an audio stream like http://air.radiorecord.ru:805/chil_320 to my Sony ZR5 using the API?
Thank you.

Since this device has a built-in Chromecast engine, you can also use that to stream a URL. At the moment I'm using Python with this code:
import time
import pychromecast

# Discover Chromecast devices on the network.
# (Note: newer pychromecast releases return a (chromecasts, browser) tuple
# here instead of a plain list; this snippet targets the older API.)
chromecasts = pychromecast.get_chromecasts()

name = ''
for cc in chromecasts:
    print(cc.device.friendly_name)
    name = cc.device.friendly_name  # keeps the last device found

# Pick the device by friendly name and wait until it is ready.
cast = next(cc for cc in chromecasts if cc.device.friendly_name == name)
cast.wait()
print(cast.device)
print(cast.status)

# Stream the URL through the media controller.
mc = cast.media_controller
mc.play_media('https://c1icy.prod.playlists.ihrhls.com/7053_icy', 'audio/mp3')
mc.block_until_active()
print(mc.status)

mc.pause()
time.sleep(5)
mc.play()

Your only option is to use DLNA, and it will not work for all streams.
I have had the best success when using the IP address rather than the hostname.
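If you want to try the DLNA route, here is a minimal sketch of mine (not from the original answer): it pushes the stream URL with a raw UPnP AVTransport SOAP call. The IP address and the control path are hypothetical; look up the real AVTransport control URL in the device's UPnP description XML first.
import requests

# Hypothetical values: replace with the speaker's IP and the AVTransport
# control URL from its UPnP device description XML.
CONTROL_URL = 'http://192.168.1.50:52323/upnp/control/AVTransport'
STREAM_URL = 'http://air.radiorecord.ru:805/chil_320'

def soap(action, body):
    # Build and send a standard UPnP AVTransport:1 SOAP request.
    headers = {
        'Content-Type': 'text/xml; charset="utf-8"',
        'SOAPACTION': f'"urn:schemas-upnp-org:service:AVTransport:1#{action}"',
    }
    envelope = f'''<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:{action} xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">
      <InstanceID>0</InstanceID>{body}
    </u:{action}>
  </s:Body>
</s:Envelope>'''
    return requests.post(CONTROL_URL, data=envelope.encode(), headers=headers)

# Point the renderer at the stream, then start playback.
soap('SetAVTransportURI',
     f'<CurrentURI>{STREAM_URL}</CurrentURI><CurrentURIMetaData></CurrentURIMetaData>')
soap('Play', '<Speed>1</Speed>')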

Related

HTTPRequest Roblox

I'm currently making a Roblox whitelist system and it's almost finished, but I need one more thing. I scripted it and it doesn't work (code below), and I couldn't find anything that fixes what I have (script and screenshot of the error below). Thanks.
local key = 1
local HttpService = game:GetService("HttpService")

local r = HttpService:RequestAsync({
    Url = "https://MyWebsiteUrl.com/check.php?key="..key,
    Method = "GET"
})

local i = HttpService:JSONDecode(r.Body)
for n, v in pairs(i) do
    print(tostring(n)..", "..tostring(v))
end
I assume the website you are using to validate the key returns the response as raw text; if so, then:
local key = 1
local HttpService = game:GetService("HttpService")

-- GetAsync returns the response body as a string
local r = HttpService:GetAsync("https://MyWebsiteUrl.com/check.php?key="..key)
local response = HttpService:JSONDecode(r)
print(response)
I think this is because you tried to concatenate a string (the URL) with a number (the key variable); try making the key a string.

How do I find the complete list of url-paths within a website for scraping?

Is there a way I can use Python to see the complete list of url-paths for a website I am scraping?
The structure of the URL doesn't change, just the paths:
https://www.broadsheet.com.au/{city}/guides/best-cafes-{area}
Right now I have a function that lets me define {city} and {area} using an f-string literal, but I have to do this manually; for example, city = melbourne and area = fitzroy.
I'd like to make the function iterate through all available paths for me, but I need to work out how to get the complete list of paths.
Is there a way a scraper can do it?
You can parse the sitemap for the required URLs, for example:
import requests
from bs4 import BeautifulSoup

# The sitemap index lists per-section sitemaps inside <loc> elements.
url = 'https://www.broadsheet.com.au/sitemap'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

for loc in soup.select('loc'):
    # Follow only the per-city guide sitemaps.
    if not loc.text.strip().endswith('/guide'):
        continue
    soup2 = BeautifulSoup(requests.get(loc.text).content, 'html.parser')
    for loc2 in soup2.select('loc'):
        if '/best-cafes-' in loc2.text:
            print(loc2.text)
Prints:
https://www.broadsheet.com.au/melbourne/guides/best-cafes-st-kilda
https://www.broadsheet.com.au/melbourne/guides/best-cafes-fitzroy
https://www.broadsheet.com.au/melbourne/guides/best-cafes-balaclava
https://www.broadsheet.com.au/melbourne/guides/best-cafes-preston
https://www.broadsheet.com.au/melbourne/guides/best-cafes-seddon
https://www.broadsheet.com.au/melbourne/guides/best-cafes-northcote
https://www.broadsheet.com.au/melbourne/guides/best-cafes-fairfield
https://www.broadsheet.com.au/melbourne/guides/best-cafes-ascot-vale
https://www.broadsheet.com.au/melbourne/guides/best-cafes-west-melbourne
https://www.broadsheet.com.au/melbourne/guides/best-cafes-flemington
https://www.broadsheet.com.au/melbourne/guides/best-cafes-windsor
https://www.broadsheet.com.au/melbourne/guides/best-cafes-kensington
https://www.broadsheet.com.au/melbourne/guides/best-cafes-prahran
https://www.broadsheet.com.au/melbourne/guides/best-cafes-essendon
https://www.broadsheet.com.au/melbourne/guides/best-cafes-pascoe-vale
https://www.broadsheet.com.au/melbourne/guides/best-cafes-albert-park
https://www.broadsheet.com.au/melbourne/guides/best-cafes-port-melbourne
https://www.broadsheet.com.au/melbourne/guides/best-cafes-armadale
https://www.broadsheet.com.au/melbourne/guides/best-cafes-brighton
https://www.broadsheet.com.au/melbourne/guides/best-cafes-malvern
https://www.broadsheet.com.au/melbourne/guides/best-cafes-malvern-east
https://www.broadsheet.com.au/melbourne/guides/best-cafes-glen-iris
https://www.broadsheet.com.au/melbourne/guides/best-cafes-camberwell
https://www.broadsheet.com.au/melbourne/guides/best-cafes-hawthorn-east
https://www.broadsheet.com.au/melbourne/guides/best-cafes-brunswick-east
https://www.broadsheet.com.au/melbourne/guides/best-cafes-bentleigh
https://www.broadsheet.com.au/melbourne/guides/best-cafes-coburg
https://www.broadsheet.com.au/melbourne/guides/best-cafes-richmond
https://www.broadsheet.com.au/melbourne/guides/best-cafes-bentleigh-east
https://www.broadsheet.com.au/melbourne/guides/best-cafes-collingwood
https://www.broadsheet.com.au/melbourne/guides/best-cafes-elwood
https://www.broadsheet.com.au/melbourne/guides/best-cafes-abbotsford
https://www.broadsheet.com.au/melbourne/guides/best-cafes-south-yarra
https://www.broadsheet.com.au/melbourne/guides/best-cafes-yarraville
https://www.broadsheet.com.au/melbourne/guides/best-cafes-thornbury
https://www.broadsheet.com.au/melbourne/guides/best-cafes-west-footscray
https://www.broadsheet.com.au/melbourne/guides/best-cafes-footscray
https://www.broadsheet.com.au/melbourne/guides/best-cafes-south-melbourne
https://www.broadsheet.com.au/melbourne/guides/best-cafes-hawthorn
https://www.broadsheet.com.au/melbourne/guides/best-cafes-carlton-north
https://www.broadsheet.com.au/melbourne/guides/best-cafes-brunswick
https://www.broadsheet.com.au/melbourne/guides/best-cafes-carlton
https://www.broadsheet.com.au/melbourne/guides/best-cafes-elsternwick
https://www.broadsheet.com.au/sydney/guides/best-cafes-bronte
https://www.broadsheet.com.au/sydney/guides/best-cafes-coogee
https://www.broadsheet.com.au/sydney/guides/best-cafes-rosebery
https://www.broadsheet.com.au/sydney/guides/best-cafes-ultimo
https://www.broadsheet.com.au/sydney/guides/best-cafes-enmore
https://www.broadsheet.com.au/sydney/guides/best-cafes-dulwich-hill
https://www.broadsheet.com.au/sydney/guides/best-cafes-leichhardt
https://www.broadsheet.com.au/sydney/guides/best-cafes-glebe
https://www.broadsheet.com.au/sydney/guides/best-cafes-annandale
https://www.broadsheet.com.au/sydney/guides/best-cafes-rozelle
https://www.broadsheet.com.au/sydney/guides/best-cafes-paddington
https://www.broadsheet.com.au/sydney/guides/best-cafes-balmain
https://www.broadsheet.com.au/sydney/guides/best-cafes-erskineville
https://www.broadsheet.com.au/sydney/guides/best-cafes-willoughby
https://www.broadsheet.com.au/sydney/guides/best-cafes-bondi-junction
https://www.broadsheet.com.au/sydney/guides/best-cafes-north-sydney
https://www.broadsheet.com.au/sydney/guides/best-cafes-bondi
https://www.broadsheet.com.au/sydney/guides/best-cafes-potts-point
https://www.broadsheet.com.au/sydney/guides/best-cafes-mosman
https://www.broadsheet.com.au/sydney/guides/best-cafes-alexandria
https://www.broadsheet.com.au/sydney/guides/best-cafes-crows-nest
https://www.broadsheet.com.au/sydney/guides/best-cafes-manly
https://www.broadsheet.com.au/sydney/guides/best-cafes-woolloomooloo
https://www.broadsheet.com.au/sydney/guides/best-cafes-newtown
https://www.broadsheet.com.au/sydney/guides/best-cafes-vaucluse
https://www.broadsheet.com.au/sydney/guides/best-cafes-chippendale
https://www.broadsheet.com.au/sydney/guides/best-cafes-marrickville
https://www.broadsheet.com.au/sydney/guides/best-cafes-redfern
https://www.broadsheet.com.au/sydney/guides/best-cafes-camperdown
https://www.broadsheet.com.au/sydney/guides/best-cafes-darlinghurst
https://www.broadsheet.com.au/adelaide/guides/best-cafes-goodwood
https://www.broadsheet.com.au/perth/guides/best-cafes-northbridge
https://www.broadsheet.com.au/perth/guides/best-cafes-leederville
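If you then want the (city, area) pairs for your f-string function, they can be split back out of each collected URL. A small follow-up sketch of mine, not part of the original answer:
from urllib.parse import urlparse

def city_area(guide_url):
    # Path looks like /{city}/guides/best-cafes-{area}
    parts = urlparse(guide_url).path.strip('/').split('/')
    return parts[0], parts[-1].replace('best-cafes-', '')

print(city_area('https://www.broadsheet.com.au/melbourne/guides/best-cafes-fitzroy'))
# ('melbourne', 'fitzroy')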
You are essentially trying to create a spider, just like search engines do. So why not use one that already exists? It's free for up to 100 daily queries. You will have to set up a Google Custom Search and define a search query.
Get your API key from here: https://developers.google.com/custom-search/v1/introduction/?apix=true
Define a new search engine at https://cse.google.com/cse/all using the URL https://www.broadsheet.com.au/
Click the public URL and copy the part that looks like cx=123456:abcdef
Place your API key and the cx part in the Google URL below.
Adjust the query below to get results for different cities. I set it up to find results for Melbourne, but you can easily use a placeholder there and format the string.
import requests

API_KEY = 'your_custom_search_key'  # from the developer console
CX = 'your_custom_search_id'        # the cx= part of your engine's public URL

google = ('https://www.googleapis.com/customsearch/v1?key=' + API_KEY +
          '&cx=' + CX +
          '&q=site:https://www.broadsheet.com.au/melbourne/guides/best'
          '+%22best+cafes+in%22+%22melbourne%22&start={}')

results = []
with requests.Session() as session:
    start = 1
    while True:
        result = session.get(google.format(start)).json()
        results += result['items']  # collect this page before paging on
        if 'nextPage' in result['queries']:
            start = result['queries']['nextPage'][0]['startIndex']
        else:
            break
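The matching guide URLs are then in each result item's link field; for example (my addition):
links = [item['link'] for item in results]
print(links[:5])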

Pause/Stop AjaxDataSource Bokeh Stream

I am able to set up my graph for streaming just fine. Here's the initialization:
self.data_source = AjaxDataSource(data_url='my_route',
                                  polling_interval=1000,
                                  mode='append', max_size=300)
Now I want to 'pause' the polling of the AjaxDataSource. I couldn't find a way to do this in the documentation, and I'm NOT running a Bokeh server, so I can't use Bokeh server solutions.
I came up with one possible solution: just return an empty data set from the endpoint that feeds the AjaxDataSource. So in the example above, the my_route function would look something like this:
def my_route():
    if not self.is_paused:
        data = normal_data_to_graph
    else:
        data = {}  # empty payload, so nothing is appended while paused
    return data
Once you set polling_interval = None in Python, the source will not make any requests. In CustomJS you can then start the paused polling; here, source is an AjaxDataSource instance.
source.polling_interval = 1000; // the interval you want
source.initialized = false;
source.setup();
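Putting the two halves together, a minimal standalone sketch might look like this (my addition: the route URL and the 'x'/'y' column names are placeholders, and the Button/CustomJS wiring is just one way to trigger the JS above):
from bokeh.layouts import column
from bokeh.models import AjaxDataSource, Button, CustomJS
from bokeh.plotting import figure, show

# Start "paused": polling_interval=None means no requests are made yet.
source = AjaxDataSource(data_url='http://localhost:5000/my_route',
                        polling_interval=None, mode='append', max_size=300)

p = figure()
p.line('x', 'y', source=source)  # assumes the route returns 'x' and 'y' columns

start = Button(label="Start polling")
start.js_on_click(CustomJS(args=dict(source=source), code="""
    source.polling_interval = 1000;  // the interval you want
    source.initialized = false;
    source.setup();
"""))

show(column(start, p))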

Use R serial package to extract information stored in a Trovan GR250 RFID reader

I am trying to access information stored inside a Trovan reader using the R serial package via a serial port. The connection seems to be effective, since the reader's red LED lights up briefly when the serialConnection function is run, but read.serialConnection returns an empty string instead of the expected tag code. Does anyone have any ideas? Below are a link to the reader documentation and the R script.
Many thanks
http://www.vantro.biz/GR-250%20Computer%20Interface%20Manual.pdf
trovan <- serialConnection(NA, port = "com1", mode = "9600,N,8,1",
                           translation = 'cr', handshake = 'xonxoff')
open(trovan)
res <- read.serialConnection(trovan)
close(trovan)
res
[1] " "
library(serial)
library(radio)  # have to add some waiting time between each step

trovan <- serialConnection("get_rfid", port = "COM4", mode = "9600,N,8,1",
                           newline = 1, translation = "cr",
                           handshake = "xonxoff")  # Windows OS
open(trovan)
wait(1)
write.serialConnection(trovan, "N")
wait(2.5)
res <- read.serialConnection(trovan)
close(trovan)
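If it helps to rule out R-specific issues, the same exchange can be attempted with Python's pyserial; a sketch of mine, where the port, the 'N' command, and the CR terminator are taken from the R script above:
import time

import serial  # pyserial

# Same settings as the R script: 9600 baud, 8N1, XON/XOFF handshake.
with serial.Serial('COM4', baudrate=9600, bytesize=serial.EIGHTBITS,
                   parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE,
                   xonxoff=True, timeout=2.5) as trovan:
    time.sleep(1)
    trovan.write(b'N\r')  # 'N' command, CR-terminated like translation = 'cr'
    time.sleep(2.5)
    print(trovan.read_until(b'\r'))  # expect the tag code back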

Unable to read AT Command Response

I am testing some basic AT commands in HyperTerminal. The GSM modem responds to my commands, too, but the problem is that it shows unreadable text. I use the following commands:
AT
OK
AT+CUSD=1,"*247#",15
OK
+CUSD: 1,"0062004B006100730068000A00310020004D0032004D0020005400720061006E007300
6600650072000A0032002000440069007300620075007200730065000A00330020004D0079002000
62004B006100730068000A0034002000480065006C0070006C0069006E0065000A",72
AT+CUSD=1,"1",15
OK
AT+CUSD=1,"*247#",15 command should display
Menu 1
Menu 2
Menu 3
Something like that. But it displayed the hexadecimal code which it unreadable. How can I get plain text ? Can anyone help Me ?
Judging by the information provided: you send the +CUSD request with a DCS (Data Coding Scheme) of 15, and the response from the bKash service comes back with a DCS of 72. It looks like your modem does not support the encoding specified in the DCS from bKash.
I found a fairly similar question and solution to this one. Try to ensure that +CSCS is set to something like "IRA" or "GSM" and see what happens then with your +CUSD responses.
Use the following functions to decode the "UCS2" response data:
using System;
using System.Text;

// Hex2ByteArray was referenced but not defined in the original answer;
// a straightforward implementation converting "0062004B..." to raw bytes:
public static byte[] Hex2ByteArray(String strHex)
{
    var ba = new byte[strHex.Length / 2];
    for (int i = 0; i < ba.Length; i++)
        ba[i] = Convert.ToByte(strHex.Substring(i * 2, 2), 16);
    return ba;
}

public static String HexStr2UnicodeStr(String strHex)
{
    byte[] ba = Hex2ByteArray(strHex);
    return HexBytes2UnicodeStr(ba);
}

// UCS2 on the air interface is big-endian UTF-16.
public static String HexBytes2UnicodeStr(byte[] ba)
{
    var strMessage = Encoding.BigEndianUnicode.GetString(ba, 0, ba.Length);
    return strMessage;
}
for example:
String str2 = SmsEngine.HexStr2UnicodeStr("0062004B006100730068000A00310020004D0032004D0020005400720061006E0073006600650072000A0032002000440069007300620075007200730065000A00330020004D007900200062004B006100730068000A0034002000480065006C0070006C0069006E0065000A");
// str2 = "bKash\n1 M2M Transfer\n2 Disburse\n3 My bKash\n4 Helpline\n"
Please also check UnicodeStr2HexStr()
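If you are testing from Python instead, the same UCS2 payload (big-endian UTF-16) decodes in one line; a small sketch of mine, not part of the original answer:
payload = (
    "0062004B006100730068000A00310020004D0032004D0020005400720061006E0073"
    "006600650072000A0032002000440069007300620075007200730065000A00330020"
    "004D007900200062004B006100730068000A0034002000480065006C0070006C0069"
    "006E0065000A"
)
print(bytes.fromhex(payload).decode('utf-16-be'))
# bKash
# 1 M2M Transfer
# 2 Disburse
# 3 My bKash
# 4 Helpline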
This code is in a format called PDU (Protocol Data Unit). Decoding it is not straightforward; you need to understand the structure first.
