Hi, I found this post, "Does Ant Media Server has live rewind feature?", and it is working great, but does anyone know how to limit the duration of the rewind to a specific value?
I'm trying to make a 24h rewind for a 24/7 stream.
Thanks for reading.
You have 2 options for this requirement. Let me explain step by step.
Option 1: You can modify the Segment List Size and Segment Duration parameters according to your requirements.
Segment Duration is the length of each .ts chunk in seconds. The default is 2.
Segment List Size is the number of .ts chunks kept in the .m3u8 file. The default is 5, so by default you will have 5 .ts chunks in the m3u8 file.
Now to your requirement: the rewind window is segment duration * segment list size, so that product needs to cover 24h.
Let's change the segment duration to 4 seconds.
24h(24 * 60 * 60 = 86400 seconds) = 4 seconds * 21600
So you should use the following values:
Segment duration: 4
Segment list size: 21600
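The arithmetic above generalizes to any rewind window. A quick sketch in plain Python, using the values from this answer:

```python
def segment_list_size(rewind_seconds, segment_duration):
    # each .ts chunk covers segment_duration seconds, so the playlist
    # must keep rewind_seconds / segment_duration chunks
    return rewind_seconds // segment_duration

REWIND_WINDOW = 24 * 60 * 60   # 24h = 86400 seconds
SEGMENT_DURATION = 4           # seconds per .ts chunk

print(segment_list_size(REWIND_WINDOW, SEGMENT_DURATION))  # 21600
```

The same function works for any other window, e.g. a 1h rewind with 2-second chunks needs a list size of 1800.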
I tried it in my test environment and it works for 2-3 minutes; I expect it to work for 24h as well. Please keep the settings below:
settings.hlsPlayListType=
settings.hlsflags=delete_segments
settings.deleteHLSFilesOnEnded=true
in the app settings as well. Please test it first.
Option 2: You can restart your stream every 24h, so that Ant Media Server removes the old chunks and starts again.
The current wrk configuration allows sending continuous requests for a number of seconds (the duration parameter).
Is there a way to use wrk to send a fixed number of requests and then exit?
My use case: I want to create a large number of threads and connections (e.g. 1000 threads with 100 connections per thread) and send instantaneous bursts towards the server.
You can do it with a Lua script:
local counter = 1

-- called once per response on each thread
function response()
   if counter == 100 then
      wrk.thread:stop()  -- stop this thread after 100 responses
   end
   counter = counter + 1
end
Pass this script to wrk with the -s command-line parameter.
I made changes to wrk to introduce new knobs. Let me know if anyone is interested in the patch and I can post it.
I added a -r option to send exactly the requested number of requests and bail out.
Artem,
I have this code-change in my fork:
https://github.com/bhakta0007/wrk
I have a Firestore query that seems to me like it should execute quickly, but instead it is taking 10-30 seconds per round trip. I'm using the Firebase iOS SDK version 4.5.0 for Swift 4, Xcode version 9.1. My Firestore pod is also 4.5.0.
I have a Firestore collection called locations with 3,500 location documents in it. My query gets all location documents from the locations collection where the field, "geohash_6," is equal to "9q8yyq." To me that seems like it should be a rather simple query to execute, but it's taking a really long time each time I call it.
The problem is, I need to run this query for 5 different geohashes on each update. I'm hoping for a quicker load time to the UI than 10 seconds per query, and definitely quicker than 30. Is there anything I can do to speed this up? It seems that even if I were to use Elasticsearch or a similar search/indexing service, serving up the full Firestore results identified by the search service would still take 10-30 seconds.
Thanks!
Here is my actual query: Firestore.firestore().collection("locations").whereField("geohash_6", isEqualTo: "9q8yyq")
Here is a debug log I had it output so I can see the times of a few consecutive requests:
31 seconds:
load: running updateLocationStore()
load: going out for locations in 9q8yyq at 2017-11-28T14:20:35
load: back with locations in 9q8yyq at 2017-11-28T14:21:06
13 seconds:
load: running updateLocationStore()
load: going out for locations in 9q8yyw at 2017-11-28T14:21:06
load: back with locations in 9q8yyw at 2017-11-28T14:21:19
14 seconds:
load: running updateLocationStore()
load: going out for locations in 9q8yyr at 2017-11-28T14:21:19
load: back with locations in 9q8yyr at 2017-11-28T14:21:33
12 seconds:
load: running updateLocationStore()
load: going out for locations in 9q8yyx at 2017-11-28T14:21:33
load: back with locations in 9q8yyx at 2017-11-28T14:21:45
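One thing the timestamps above make visible: the geohash lookups run back to back, so the delays add up. Whatever causes each query's latency, issuing the lookups concurrently caps the total wait at the slowest single query instead of the sum. A language-agnostic sketch in Python, where `fetch_locations` is a hypothetical stand-in for the Firestore call (in Swift the equivalent is simply starting all the queries at once and updating the UI as each completes):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_locations(geohash):
    # hypothetical stand-in for:
    # Firestore.firestore().collection("locations")
    #     .whereField("geohash_6", isEqualTo: geohash)
    return {"geohash": geohash, "docs": []}

# the four geohashes from the log above
geohashes = ["9q8yyq", "9q8yyw", "9q8yyr", "9q8yyx"]

# run the queries in parallel; total wall time ~ slowest query
with ThreadPoolExecutor(max_workers=len(geohashes)) as pool:
    results = list(pool.map(fetch_locations, geohashes))
```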
I'm using Graphite/Grafana to record the sizes of all collections in a MongoDB instance. I wrote a simple (WIP) Python script to do so:
#!/usr/bin/python
from pymongo import MongoClient
import socket
import time

statsd_ip = '127.0.0.1'
statsd_port = 8125

# create a UDP socket for statsd
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

client = MongoClient(host='12.34.56.78', port=12345)
db = client.my_DB

# get collection list each runtime
collections = db.collection_names()
sizes = {}

# main loop
while True:
    # get collection size per name
    for collection in collections:
        sizes[collection] = db.command('collstats', collection)['size']
    # write each metric to statsd
    for name in sizes:
        message = "collection_%s:%d|c" % (name, sizes[name])
        # .encode() keeps this working on Python 3, where sendto needs bytes
        sock.sendto(message.encode(), (statsd_ip, statsd_port))
    time.sleep(60)
This properly shows all of my collection sizes in Grafana. However, I want to get a rate of change on these sizes, so I built the following Graphite query in Grafana:
derivative(statsd.myHost.collection_myCollection)
And the graph shows up totally blank. Any ideas?
FOLLOW-UP: When selecting a time range greater than 24h, all data similarly disappears from the graph. I can't for the life of me figure that one out.
Update: This was due to the fact that my collectd was configured to send samples every second. The statsd plugin for collectd, however, was receiving data every 60 seconds, so I ended up with None for most data points.
I discovered this by checking the raw data in Graphite by appending &format=raw to the end of a graphite-api query in a browser, which gives you the value of each data point as a comma-separated list.
The temporary fix for this was to surround the graphite query with keepLastValue(60). This however creates a stair-step graph, as the value for each None (60 values) becomes the last valid value within 60 steps. Graphing a derivative of this then becomes a widely spaced sawtooth graph.
In order to fix this, I will probably go on to fix the flush interval on collectd or switch to a standalone statsd instance and configure as necessary from there.
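To illustrate what was happening, here are toy models of Graphite's derivative() and keepLastValue() functions (simplified reimplementations, not Graphite's actual code). With None at most timestamps, every per-point difference involves a None and the derivative graph is blank; filling gaps with the last value produces the stair-step series whose derivative is the spaced sawtooth described above:

```python
def keep_last_value(series):
    # fill None gaps with the last known value, like Graphite's keepLastValue
    out, last = [], None
    for v in series:
        if v is not None:
            last = v
        out.append(last)
    return out

def derivative(series):
    # per-point difference, like Graphite's derivative;
    # any None on either side of the difference yields None
    out, prev = [], None
    for v in series:
        out.append(None if v is None or prev is None else v - prev)
        prev = v
    return out

# collection size arrives every 60s, but collectd flushes every second,
# so most stored points are None
raw = [100, None, None, 130, None, None, 190]

print(derivative(raw))                   # all None -> blank graph
print(derivative(keep_last_value(raw)))  # sawtooth: [None, 0, 0, 30, 0, 0, 60]
```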
I would like to set up a dialplan that plays audio at certain call durations.
For example:
play an audio file saying that the call duration is one minute, then play another file stating two minutes, then another saying ten minutes.
For that you can use Goto and the RAND function:
pro-sip*CLI> core show function RAND
-= Info about function 'RAND' =-
[Synopsis]
Choose a random number in a range.
[Description]
Choose a random number between <min> and <max>. <min> defaults to '0', if
not specified, while <max> defaults to 'RAND_MAX' (2147483647 on many
systems).
Example: Set(junky=${RAND(1,8)}); Sets junky to a random number between
1 and 8, inclusive.
[Syntax]
RAND([min][,max])
[Arguments]
Not available
[See Also]
Not available
You can also just name files 1.wav, 2.wav, etc. and use Playback(${RAND(1,2)}), or you can add them to the musiconhold folder and play random MOH.
Hi guys!
I'm developing an online auction with a time limit.
The ending time applies to the single open auction.
After logging into the site I show the time left for the open auction. The time is calculated this way:
EndDateTime = Date and Time of end of auction;
DateTime.Now() = current Date and Time
timeLeft= (EndDateTime - DateTime.Now()).Seconds().
In JavaScript, I update the time left with:
timeLeft=timeLeft-1
The problem is that when I log in from different browsers at the same time, the browsers show different countdowns.
Help me, please!
I guess there will always be differences of a few seconds because of the server processing time and the time needed to download the page.
The best way would be to actually send the end time to the browser and calculate the time remaining in javascript. That way the times should be the same (on the same machine of course).
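One more thing worth checking in the pseudocode: if it maps to C#, TimeSpan.Seconds returns only the 0-59 seconds component of the span, while TotalSeconds returns the full remaining time, so .Seconds() may itself be a bug. Python's timedelta has the same trap; a minimal sketch (dates chosen arbitrarily so the gap exceeds a day):

```python
from datetime import datetime

# hypothetical auction end and "now", 25 hours apart
end_time = datetime(2017, 11, 29, 15, 0, 0)
now = datetime(2017, 11, 28, 14, 0, 0)

delta = end_time - now
# delta.seconds is only the seconds-within-a-day component: 3600 here
# delta.total_seconds() is the full remaining time: 90000 seconds (25h)
time_left = int(delta.total_seconds())
print(time_left)  # 90000
```

Whatever the server language, send the absolute end time to the client and derive the full remaining seconds there, rather than shipping a pre-computed countdown.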
Roman,
I had a look at eBay (they know a thing or two about this stuff :)) and noticed that once the item is inside the last 90 seconds, a GET request is fired every 2 seconds to update the variables in the JavaScript via a JSON response. You can watch this in Firebug/Fiddler to see what it does.
Here is an example of the JSON it pulls down:
{
    "ViewItemLiteResponse": {
        "Item": [
            {
                "IsRefreshPage": false,
                "ViewerItemRelation": "NONE",
                "EndDate": {
                    "Time": "12:38:48 BST",
                    "Date": "01 Oct, 2010"
                },
                "LastModifiedDate": 1285932821000,
                "CurrentPrice": {
                    "CleanAmount": "23.00",
                    "Amount": 23,
                    "MoneyStandard": "£23.00",
                    "CurrencyCode": "GBP"
                },
                "IsEnded": false,
                "AccessedDate": 1285933031000,
                "BidCount": 4,
                "MinimumToBid": {
                    "CleanAmount": "24.00",
                    "Amount": 24,
                    "MoneyStandard": "£24.00",
                    "CurrencyCode": "GBP"
                },
                "TimeLeft": {
                    "SecondsLeft": 37,
                    "MinutesLeft": 1,
                    "HoursLeft": 0,
                    "DaysLeft": 0
                },
                "Id": 160485015499,
                "IsFinalized": false,
                "ViewerItemRelationId": 0,
                "IsAutoRefreshEnabled": true
            }
        ]
    }
}
You could do something similar inside your code.
[edit] - on further inspection of the eBay code: although it only runs the intensive GET requests in the last 90 seconds, the same JSON as above is also embedded when the page initially loads. Then, at around 3 minutes remaining, the GET request runs every 10 seconds. I therefore assume the same JavaScript runs against that structure whether the item is inside 90 seconds or not.
This may be a problem with the JavaScript loading at different speeds,
or with setInterval triggering at slightly different times depending on the loop.
I would look into those two.