I am using nginx with the rtmp module, and I was wondering: is there a way to get the number of live viewers that are watching the streams?
Thank you in advance for the help.
Take a look at hlswatch: https://github.com/faryon93/hlswatch
I'm currently considering it for a project. I've never seen it work, though, so no guarantees. It's the only thing I've seen like it; I figure most people must build their own system.
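If you don't want a separate service at all, nginx-rtmp itself ships a statistics page: enable the rtmp_stat handler (e.g. location /stat { rtmp_stat all; } in your http block) and it returns XML with an nclients count per stream. A minimal Python sketch that polls it; the URL is a placeholder for your setup:

    # Assumes nginx-rtmp's stat page is enabled and reachable at STAT_URL.
    import urllib.request
    import xml.etree.ElementTree as ET

    STAT_URL = "http://localhost/stat"  # placeholder; adjust to your server

    def live_viewers(url=STAT_URL):
        """Return {stream name: client count} from the rtmp_stat XML."""
        xml = urllib.request.urlopen(url).read()
        root = ET.fromstring(xml)
        counts = {}
        # Each <stream> element carries <name> and <nclients>; note that
        # nclients includes the publisher's own connection, so subtract
        # one if you only want viewers.
        for stream in root.iter("stream"):
            name = stream.findtext("name")
            counts[name] = int(stream.findtext("nclients", default="0"))
        return counts

    if __name__ == "__main__":
        print(live_viewers())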
I'm used to git and command-line tools, but I'm freelancing on a WordPress site. I have FTP access, but the site I'm working on has around 16,000 files just in wp-content. Is there a way to automatically upload only changed files? I'm using FileZilla, and there's an option to do that, but going through 16,000 files takes hours anyway. I know I could use git and do things manually, but that's a pain.
I'm open to suggestions outside of FTP if there's an easier way in general for WordPress development.
Since you're bound to FTP¹, your options are quite limited. There are free (in limited capacity) services to deploy to SFTP² via git. Some examples: DeployBot, Buddy.works, DeployHQ, etc. There is also Beanstalk, which I've used in the past and which worked rather well, but the free account is limited to 100MB (which would obviously not work for your situation, and it sounds like the client is too cheap to buy a paid account). It is a bit odd to me to store a media library in git, but that is another topic, and I understand your dilemma.
¹I would highly recommend using the insecurities of FTP as an argument to try to convince the client to switch to... literally anything else.
²Not certain if these services support FTP (as opposed to SFTP). You would probably need to ask, but they may not given the insecurity of FTP.
EDIT - There may also be some open-source options, like this (albeit old) solution: https://github.com/mehedi101/ftploy (purely as an example; there are others, but they vary in complexity and I have not tried them)
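If none of those services fit, a DIY route is also possible: ask git which files changed and upload only those over FTP. A rough sketch of the idea (host, credentials, and paths are placeholders; it assumes the site is tracked in git and the remote directories already exist):

    # Upload only the files that changed between two git revisions, over FTP.
    import subprocess
    from ftplib import FTP
    from pathlib import PurePosixPath

    HOST, USER, PASSWORD = "ftp.example.com", "user", "secret"  # placeholders
    REMOTE_ROOT = "/public_html"  # placeholder remote document root

    def changed_files(base="origin/main", head="HEAD"):
        # --diff-filter=ACM: added/copied/modified files only (deletions
        # would need separate handling with FTP.delete()).
        out = subprocess.check_output(
            ["git", "diff", "--name-only", "--diff-filter=ACM", base, head],
            text=True,
        )
        return [line for line in out.splitlines() if line]

    def upload(files):
        with FTP(HOST) as ftp:
            ftp.login(USER, PASSWORD)
            for path in files:
                remote = str(PurePosixPath(REMOTE_ROOT) / path)
                with open(path, "rb") as fh:
                    ftp.storbinary(f"STOR {remote}", fh)  # dirs must exist
                print("uploaded", remote)

    if __name__ == "__main__":
        upload(changed_files())

This is essentially what those deployment services automate, with the added safety of logs and rollbacks.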
This is the graph for one of my sites, https://www.alebalweb-blog.com: the first line of the Firefox developer tools -> Network panel. I'm not sure the Blocked and Waiting entries are "normal".
Waiting: I suspect this is the server's fault. It's a small VPS on Vultr running Ubuntu 18.04. The other day I updated to php7.4-fpm, and I haven't enabled OPcache, memcached, APCu, or anything else yet, because (unfortunately) my sites are small, less than a thousand visits a day, and I don't know whether it makes sense to enable caching systems. Might they also affect indexing and ranking on search engines?
Even though Yandex and Bing generate a lot of work for my little server... maybe they alone would benefit from the cache?
Blocked is more confusing. I'm not sure it's me; doesn't everything there happen before the request even reaches my server? Maybe it's Vultr's fault? Maybe NameSilo's (where the domains are registered)? Maybe mine, some Apache configuration or something else? Maybe these are normal values? I have no idea.
Can anyone help me understand whether these are normal values? And if they are not, how I can improve them?
-------------------------update------------------------
I have read the pages you suggested; even there, they do not seem to have understood much or found a solution...
I did a few things on my little server: blocked Yandex, enabled OPcache, and installed memcached.
The intent is to stabilize things, so I can begin to understand something.
I have run many other tests these days, and I have seen results like these:
This is another site, but on the same server; the highlighted line is Matomo (statistics). The JavaScript tracking script is on a subdomain, but still on the same server.
The difference is enormous, and the tests were done within seconds of each other.
So at this point maybe the question is: do you have any suggestions on what else I can do to start understanding something?
At least to understand whether what creates these timings is me, my server, my sites' scripts, the browsers, the connection, or something else.
None of what you've posted looks very bad, but your service is sometimes taking more than 6 s to respond to the initial connection request. There are probably a lot of small things wrong that you can fix; I would start by looking at this question, which addresses the same problem I'm seeing with your site.
The timing looks a bit large to me.
It seems the server does not respond for about 150 ms (Blocked), especially on the main page.
Then it takes up to 150 ms for TLS setup, 200 ms to load the content, etc.
But this is not stable.
Sometimes it took about 800 ms to receive the homepage; sometimes the whole thing took less than 200 ms.
Most likely these are server issues (your virtual server shares a physical machine with other servers).
And just for reference:
What does "Blocked" really mean in the Firefox developer tools Network monitoring?
Also, there are some general things to consider while troubleshooting (a quick timing sketch follows the list below).
I suggest creating a local (localhost) version of the site, then:
Check the time actually required to render the homepage (in the server log)
Temporarily remove gzip compression
Temporarily remove HTTPS
Temporarily remove output buffering in PHP (hopefully your code does not need it)
Check whether any "post-processing" content hooks are active in PHP
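To take the browser out of the equation while you do this, here is a minimal sketch (plain Python standard library, using the URL from the question) that shows how much the total response time jitters between identical requests:

    # Fetch the page repeatedly and print wall-clock time per request,
    # to see how stable the server's response time really is.
    import time
    import urllib.request

    URL = "https://www.alebalweb-blog.com/"

    def time_requests(url=URL, n=10):
        for i in range(n):
            start = time.perf_counter()
            with urllib.request.urlopen(url) as resp:
                resp.read()
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f"request {i + 1}: {elapsed_ms:.0f} ms")

    if __name__ == "__main__":
        time_requests()

If the spread stays as wide as 200-800 ms here too, the browser is not the cause.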
I am trying to find a good combination of libraries for managing real-time communication (client/server) using Haxe (only Haxe, not OpenFL or another framework based on Haxe), targeting Flash (SWF) for the client, with no preference for the server except not to use Neko.
The goal is to make a simple chat and display a representation of all clients in an area. Each client can move its own representation within this area, and the others see the movement.
I found some libraries to do this:
https://github.com/soywiz/haxe-ws
https://github.com/MattTuttle/hxnet
haxe-js-kit
But I'm not sure which is the best one to adopt.
Do you have any suggestions/remarks/tips for choosing the best approach?
Disclaimer: I wrote the library that I am sharing here.
My somewhat new library mphx may be able to help you. It can manage 'rooms' of connections, allows client-to-server and server-to-client messaging in the form of events, and, best of all, is cross-platform. It also works on the web with websockets.
It was originally an extension of hxnet; however, I wanted it to be easier to use. Connecting and sending a 'message' with data takes just a few lines.
I have a few examples in the GitHub repository, the simplest being the 'basic' example. One of your requirements is that it not rely on one of the big frameworks (OpenFL, etc.), and mphx doesn't. The basic example proves that, and it runs purely in the terminal. That said, it can be used with HaxeFlixel; for that, see the other examples.
It sounds like your main goal is simple, graphical multiplayer. For that you can look at the 'movement' HaxeFlixel example.
Documentation is still a little slim, and the code is alpha, so it might change or break. That can probably be said for most of the libraries you listed, though. The best way to install it is like this:
haxelib git mphx https://github.com/5Mixer/mphx.git
That will not install the examples, though. To run them, either download the repository as a zip or git clone it, and go into the examples folder.
Library: https://github.com/5Mixer/mphx
Old videos I made. A little outdated, most likely.
Video 1: https://www.youtube.com/watch?v=07J0wLXwH0g
Video 2: https://www.youtube.com/watch?v=MUx2CUtsnTU
I would like to develop a Network Inventory application that works on any operating system.
Reports on every possible resource attached to a network.
Reports all pertinent details of hardware and software.
That's (and I hate to use the phrase) my "end game".
However, I am running before I can crawl here.
I have no experience with this type of development, e.g. discovering a computer's hardware and software settings.
I've spent almost two weeks googling and come up short! :-(
So I am turning to you to ask these questions:
My first step is to find an existing open-source project I can incorporate into my own code that extracts the fine-grained details I am after, i.e. EVERYTHING there is to know about the hardware and software on a single machine.
Does such a project exist, or do I have to develop that first?
Do I have to write all this in C?
I am guessing that getting this information from a computer is going to be easier than from printers, scanners, routers, etc., i.e. everything else you would find attached to a network.
Once I have access to a single computer's details, I then need to investigate how to traverse an entire network of printers, scanners, routers, load balancers, switches, firewalls, workstations, servers, storage devices, laptops, monitors... the list goes on and on.
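The furthest I've got is imagining something like this toy Python sketch: walk a subnet and probe one TCP port to see who answers (placeholder subnet; I know real tools use ICMP, ARP, SNMP, and more, and firewalls will get in the way):

    # Crude "who is out there" sweep: try one TCP port on every host
    # in a subnet. Only a starting point, not real discovery.
    import ipaddress
    import socket

    def sweep(cidr="192.168.1.0/24", port=80, timeout=0.3):
        alive = []
        for host in ipaddress.ip_network(cidr).hosts():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                if s.connect_ex((str(host), port)) == 0:  # 0 means connected
                    alive.append(str(host))
        return alive

    if __name__ == "__main__":
        print(sweep())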
One problem I have is that I don't have a 1,000-machine network to play with!
Is there any such resource available on the internet? (Is that a silly question?)
Anyhow, if you don't ask, you won't find out!
One aspect I am really looking forward to is finding out how to traverse the entire network.
Should I be using TCP/IP for this?
What's a good site, blog, user group, or book for TCP/IP development?
How do I go about getting through firewalls?
How many questions can I ask in one go? :-)
My previous question on this topic ended up with Python being championed as the language to develop this application in.
Having looked at a few Python examples, they all seemed to be related to Windows networks and interrogating Windows Management Instrumentation (WMI). I had the feeling you can't rely on what's in WMI, and even if you can, that's no good for Unix networks.
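The examples I found looked roughly like this (using the third-party wmi package, Windows only; I can't vouch for how complete or reliable the returned data is):

    # Query a few WMI classes for hardware/software details.
    import wmi  # third-party package: pip install wmi

    def windows_summary():
        c = wmi.WMI()
        for os_info in c.Win32_OperatingSystem():
            print("OS:", os_info.Caption, os_info.Version)
        for cpu in c.Win32_Processor():
            print("CPU:", cpu.Name)
        for product in c.Win32_Product():  # installed software; can be slow
            print("Installed:", product.Name)

    if __name__ == "__main__":
        windows_summary()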
Surely common code exists for extracting hardware and software details from a computer? Why can't I find it on the internet?
Please help!
There are no prizes, though :-(
I would like to apologise if I have broken forum rules or not tried hard enough on my own before asking for assistance.
I just would like to start moving forward with this, as it's one of the best projects I have been involved with.
I am inspired by the many different challenges involved, and by the thought that if I manage to produce a useful application at the end of it, it would hopefully be extremely helpful to many people.
That's it.
Thanks in advance
DD
As a software vendor of a discovery solution, I can just say: respect for wanting to start a new one :-). Just in case you are interested in what it could look like: http://www.jdisc.com
Now to some of our experience:
Programming Language:
I wouldn't write it in C. Use Java or .NET. Those languages have great advantages when it comes to tracking down errors or problems. For instance, in Java (and I guess also in .NET), you can see the stack trace when something fails. For some pieces of code (e.g. WMI access), you might need to use C++ or C (e.g. for access to native APIs from Microsoft); use a native interface or a COM bridge from Java. In .NET, it should be even easier to access the Windows APIs.
Devices:
Well, network printers, routers, and switches are actually easier to discover. They usually expose their information via SNMP. SNMP is pretty easy to use and pretty robust. Getting information from Windows (or even Unix) systems is a bit trickier. Protocols can be blocked, misconfigured, or messed up... We had cases where WMI would simply hang when requesting data from a remote device.
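To illustrate how little code an SNMP query takes, here is a minimal sketch with Python's pysnmp library (purely an illustration; the host and community string are placeholders):

    # Fetch a device's sysDescr and sysName via SNMP v2c.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    def snmp_get(host, community="public"):
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),              # SNMP v2c
            UdpTransportTarget((host, 161), timeout=2),
            ContextData(),
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr
            ObjectType(ObjectIdentity("1.3.6.1.2.1.1.5.0")),  # sysName
        ))
        if error_indication or error_status:
            raise RuntimeError(str(error_indication or error_status))
        return {oid.prettyPrint(): value.prettyPrint()
                for oid, value in var_binds}

    if __name__ == "__main__":
        print(snmp_get("192.0.2.1"))  # placeholder address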
Test Devices:
Since we are also a smaller company, we do not have 1,000 different devices to test with either. But there are some things that might help:
a) For SNMP devices, use an SNMP simulator. We use MIMIC 9.0 from Gambit Solutions and are pretty happy with it. You can import SNMP walks from network devices and simulate each device as if it were on your network.
b) Secondly, use virtualization whenever possible. With VMware, you can install Windows, Linux, or even Solaris. We also use a project called GNS3 to emulate Cisco routers, firewalls, or Juniper routers.
c) The remaining devices you can only test if you have a customer who helps you with testing and implementing new devices.
These are just some ideas to start with. But I have to tell you that it is not trivial, and it takes a lot of time...
Hope that you got some ideas to start with...
I don't know that it's open source, but we use Spiceworks (http://www.spiceworks.com) here as an IT management platform. You may get some use out of exploring that.
I am curious as to what others are using in this situation. I know a couple of the options that are out there like a memcached port or ScaleOutSoftware. The memcached ports don't seem to be actively worked on (correct me if I'm wrong). ScaleOutSoftware is too expensive for me (I don't doubt it is worth it). This is not to say that I don't want to hear about people using memcached or ScaleOutSoftware. I'm just stating what I "know" at this point.
So my question is basically this: for those of you ACTIVELY using distributed caching, what are you using, are you happy with it, and what should I look out for?
I am moving to two servers very soon...both will be at the same location. I use caching fairly heavily (but carefully) to reduce the load on my database server.
Edit: I downloaded ScaleOut Software's solution. I've coded against it, and it seems to work really well. I just have to decide if my wallet will part with the cash for it. :) Anyone have experiences, good or bad, with ScaleOut Software?
Edit again: It's been a little while since I asked this. Any more thoughts on it? We ended up buying the solution from ScaleOut Software and have been happy with it, but I'm curious what others are doing.
Microsoft has a pending product code-named Velocity. It's still in CTP and is moving slowly, but it looks like it will be pretty good. We'll be beating it up in the near future to see how it handles what we want it to do (over 2 million reads/writes per hour). I will post back with results.
There is a 100% native .NET, well-documented, open-source (LGPL) project called Shared Cache. It doesn't seem to have been mentioned on SO yet, but it's promising and should be able to do what most people expect from a distributed cache. It even supports different strategies, like distributed or replicated caching.
I will update this post with more details as soon as I have had a chance to try it on a real project.
We're currently using an incredibly simple cache that I wrote in a couple of hours, based on re-hosting the ASP.NET cache in a Windows Service (more info and source code here). I won't pretend it's anywhere near as optimised as something like Memcached but we were just looking for something simple and free until Velocity came along, and it's held up extremely well even under fairly heavy load.
It comes down to our personal preference for core components - i.e. ones that affect whether the site is available or not - that they are either (a) supported by a vendor with a history of rapid and high quality support, or (b) written by us so that if something goes wrong we can fix it quickly. Open source is all well and good, and indeed we do use some OSS, but if your site is offline then unfortunately newsgroups et al don't have a 1 hour SLA, and just because it's OSS doesn't mean you have the necessary understanding or ability to fix it yourself.
We are using the memcached port for Windows and we are very pleased with it. The enyim.com memcached client API is great and easy to work with. It's also open source, which is a big advantage, if you ask me.
We are now using this setup in a production web-app and it has helped a lot in improving its performance.
There's a great .NET wrapper/port found here on Codeplex. Awesomesauce!
We use memcached with the enyim library in a production environment (www.funda.nl). It works fine and we are very pleased with it, but we did notice a substantial rise in CPU usage on the clients, presumably due to the serialization/deserialization going on. We do around 1,000 reads per second.
One product tried and tested by hundreds of customers worldwide is NCache. It's a feature-rich product that lets you store session state in a redundant and highly available manner, lets you share data within the enterprise as well as bridge WAN communication (essentially acting as a data fabric), and lastly lets you build an elastic caching tier so that when your application scales, you can add servers to the cache and actually boost performance further.