DirectShow .Net AddSourceFilter Release file - directshow

I have an application which plays video using DirectShow.Net and also records video. When I try to record to a filename that has just been used for playback by DirectShow, it fails because the file is still in use. Sometimes it works, but it can take anywhere from 5 to 60 seconds until the file is unlocked. Before the recording is attempted, the playback graph has definitely been destroyed. The code for creating and destroying the graph is below. If I stop and restart my application after playback, I can record to the same filename with no file locking issues.
Can anyone advise how I can correctly release the source filter so the file is not locked?
Creating the graph
try
{
    graphBuilder = (IGraphBuilder)new FilterGraph();

#if DEBUG
    // "Connect to remote graph" in GraphEdit
    rotEntry = new DsROTEntry(graphBuilder);
#endif

    hr = graphBuilder.AddSourceFilter(filename, filename, out baseFilter);
    DsError.ThrowExceptionForHR(hr);

    vmr9 = (IBaseFilter)new VideoMixingRenderer9();
    ConfigureVMR9InWindowlessMode();

    hr = graphBuilder.AddFilter(vmr9, "Video Mixing Renderer 9");
    DsError.ThrowExceptionForHR(hr);

    FilterGraphTools.ConnectFilters(graphBuilder, baseFilter, "Output", vmr9, "VMR Input0", true);
}
Destroying the graph
if (vmr9 != null)
{
    Marshal.ReleaseComObject(vmr9);
    vmr9 = null;
    windowlessCtrl = null;
}

if (graphBuilder != null)
{
    // Remove and release all filters
    FilterGraphTools.RemoveAllFilters(graphBuilder);
    Marshal.ReleaseComObject(graphBuilder);
    graphBuilder = null;
    baseFilter = null;
}

#if DEBUG
if (rotEntry != null)
{
    rotEntry.Dispose();
    rotEntry = null;
}
#endif

Ultimately the graph is a set of connected COM objects, and successful graph teardown depends on releasing them correctly, without any leaked references. Any reference you leave unreleased keeps its object alive and can keep certain resources (such as your file) locked.
The best you can do is explicitly terminate/remove the individual objects, as sketched below:
Stop the graph
Remove all filters explicitly using IFilterGraph2.RemoveFilter
Use filter-specific method calls to shut down individual filters where possible, such as passing an empty path to file source/sink filters
Then, even if a reference leak remains, the graph should no longer reference the resources. Note that you can sometimes also reuse filters when re-creating the graph.
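A minimal sketch of that teardown, assuming DirectShowLib (the same library the question uses); graphBuilder is the graph being destroyed:

using System;
using System.Runtime.InteropServices;
using DirectShowLib;

static void TeardownGraph(IGraphBuilder graphBuilder)
{
    // 1. Stop the graph before removing anything.
    ((IMediaControl)graphBuilder).Stop();

    // 2. Remove and release every filter explicitly.
    IEnumFilters enumFilters;
    int hr = graphBuilder.EnumFilters(out enumFilters);
    DsError.ThrowExceptionForHR(hr);

    IBaseFilter[] filters = new IBaseFilter[1];
    while (enumFilters.Next(1, filters, IntPtr.Zero) == 0)
    {
        graphBuilder.RemoveFilter(filters[0]);
        Marshal.ReleaseComObject(filters[0]);
        enumFilters.Reset(); // removing a filter invalidates the enumerator
    }
    Marshal.ReleaseComObject(enumFilters);

    // 3. Release the graph itself last.
    Marshal.ReleaseComObject(graphBuilder);
}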

Related

Microsoft's MPEG-2 demuxer filter - can I change an elementary stream pin's PID while the graph is running?

I'm working with multi-program UDP MPEG-2 TS streams that, unfortunately, dynamically re-map their elementary stream PIDs at random intervals. The stream is being demuxed using Microsoft's MPEG-2 demultiplexer filter.
I'm using the PSI-Parser filter (an example filter included in the DirectShow base classes) in order to react to the PAT/PMT changes.
The code is properly reacting to the change, yet I am experiencing some odd crashes (heap memory corruption) right after I remap the demuxer pins to their new PIDs. (The re-mapping is performed inside the thread that processes graph events, while the EC_PROGRAMCHANGED message is being handled.)
The crash could be due to faulty code on my part, yet I have not found any reference that says whether changing the pin PID mapping is safe while the graph is running.
Can anyone tell me whether this operation is safe, and if it is not, what I could do to minimize capture disruption?
I managed to find the source code for a Windows CE version of the demuxer filter. Inspecting it, indeed, it seems that it is safe to remap a pin while the filter is running.
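For reference, the remap itself is typically done through the demux output pin's IMPEG2PIDMap interface; here is a minimal sketch assuming DirectShowLib's declarations, where videoPin, oldPid and newPid are illustrative placeholders for the pin and the PIDs parsed from the new PMT:

var pidMap = (IMPEG2PIDMap)videoPin;
int hr = pidMap.UnmapPID(1, new int[] { oldPid });
DsError.ThrowExceptionForHR(hr);
hr = pidMap.MapPID(1, new int[] { newPid }, MediaSampleContent.ElementaryStream);
DsError.ThrowExceptionForHR(hr);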
I also managed to find the source of my problems with the PSI-Parser filter.
When a new transport stream is detected, or the PAT version changes, the PAT is flushed, (all programs are removed, the table is re-parsed and repopulated).
There is a subtle bug within the CPATProcessor::flush() method.
//
// flush
//
// flush an array of struct: m_mpeg2_program[];
// and unmap all PMT_PIDs pids, except one: PAT
BOOL CPATProcessor::flush()
{
    BOOL bResult = TRUE;

    bResult = m_pPrograms->free_programs(); // CPrograms::free_programs() call
    if (bResult == FALSE)
        return bResult;

    bResult = UnmapPmtPid();
    return bResult;
} // flush
Here's the CPrograms::free_programs() implementation.
_inline BOOL free_programs()
{
    for (int i = 0; i < m_ProgramCount; i++) {
        if (!HeapFree(GetProcessHeap(), 0, (LPVOID)m_programs[i]))
            return FALSE;
    }
    return TRUE;
}
The problem here is that the m_ProgramCount member is never cleared. So, apart from reporting the wrong number of programs in the table after a flush (since it is updated incrementally for each program found in the table), the next time the table is flushed it will try to release memory that was already released.
Here's my updated version that fixes the heap corruption errors:
_inline BOOL free_programs()
{
    for (int i = 0; i < m_ProgramCount; i++) {
        if (!HeapFree(GetProcessHeap(), 0, (LPVOID)m_programs[i]))
            return FALSE;
    }
    m_ProgramCount = 0; // This was missing; without it the next call frees the same memory twice
    return TRUE;
}

IIS Worker process using 6gb RAM on web server with ASP.NET MVC web site

I have a web site running in its own application pool (IIS 8). Settings for the pool are the defaults, i.e. recycle every 29 hours.
Our web server only has 8 GB RAM, and I have noticed that the worker process for this web site regularly climbs to 6 GB RAM and slows the server to a crawl. This is the only site currently on the web server.
I also have SQL Express 2016 installed. The site is using EF version 6.1.3.
The MVC site is very straightforward. It has a GETPDF controller which finds a row in a table, gets the PDF data stored in a field, then serves it back to the browser as follows:
using (eBillingEntities db = new eBillingEntities())
{
    try
    {
        string id = model.id;
        string emailaddress = Server.HtmlEncode(model.EmailAddress).ToLower().Trim();
        eBillData ebill = db.eBillDatas.ToList<eBillData>().Where(e => e.PURL == id && e.EmailAddress.ToLower().Trim() == emailaddress).FirstOrDefault<eBillData>();

        if (ebill != null)
        {
            // update the 'Lastdownloaded' field.
            ebill.LastDownloaded = DateTime.Now;
            db.eBillDatas.Attach(ebill);
            var entry = db.Entry(ebill);
            entry.Property(en => en.LastDownloaded).IsModified = true;
            db.SaveChanges();

            // Find out from the config record whether the bill is stored
            // in the table or in the local pdf folder.
            Config cfg = db.Configs.ToList<Config>().Where(c => c.Account == ebill.Account).FirstOrDefault<Config>();
            bool storePDFDataInEBillTable = true;
            if (cfg != null)
            {
                storePDFDataInEBillTable = cfg.StorePDFDataInEBillDataTable;
            }
            // End of Modification

            byte[] file;
            if (storePDFDataInEBillTable)
            {
                file = ebill.PDFData;
            }
            else
            {
                string pathToFile = "";
                if (string.IsNullOrEmpty(cfg.LocalPDFDataFolder))
                    pathToFile = cfg.LocalBackupFolder;
                else
                    pathToFile = cfg.LocalPDFDataFolder;

                if (!pathToFile.EndsWith(@"\"))
                    pathToFile += @"\";

                pathToFile += ebill.PDFFileName;
                file = System.IO.File.ReadAllBytes(pathToFile);
            }

            MemoryStream output = new MemoryStream();
            output.Write(file, 0, file.Length);
            output.Position = 0;
            HttpContext.Response.AddHeader("content-disposition", "attachment; filename=ebill.pdf");
            return new FileStreamResult(output, "application/pdf");
        }
        else
            return View("PDFNotFound");
    }
    catch
    {
        return View("PDFNotFound");
    }
}
Are there any memory leaks here?
Will the file byte array and the memory stream get freed up?
Also, is there anything else I need to do concerning clearing up the entity framework references?
If the code looks OK, where would be a good place to start looking?
Regards
Are there any memory leaks here?
No.
Will the file byte array and the memory stream get freed up?
Eventually, yes. But that may be the cause of your excessive memory use.
Also, is there anything else I need to do concerning clearing up the entity framework references?
No.
If the code looks OK, where would be a good place to start looking?
If this code is the cause of your high memory use, it's because you are loading files into memory. And you're loading two copies of each file into memory: once into a byte[], and again by copying it into a MemoryStream.
There's no need to do that.
To eliminate the second copy, use the MemoryStream(byte[]) constructor instead of copying the bytes from the byte[] into an empty MemoryStream.
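A minimal sketch of that change, reusing the names from the question:

byte[] file = ebill.PDFData;
HttpContext.Response.AddHeader("content-disposition", "attachment; filename=ebill.pdf");
// Wrap the existing byte[] directly instead of copying it into an empty stream.
return new FileStreamResult(new MemoryStream(file), "application/pdf");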
To eliminate the first copy, you can stream the data into a temporary file that will be the target of your FileStreamResult, or initialize the FileStreamResult with an ADO.NET stream.
See https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sqlclient-streaming-support
If you go with ADO.NET streaming, your DbContext will need to be scoped to your Controller instead of a local variable, which is good practice in any case.
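A rough sketch of the streaming route, under the constraint just mentioned: the connection, like the DbContext, must stay open until the response has been written, so it lives as a controller field and is cleaned up in Dispose(). The controller, table, and column names here are illustrative, not from the question.

using System.Data;
using System.Data.SqlClient;
using System.Web.Mvc;

public class GetPDFController : Controller
{
    // Held open until the response has been written; released in Dispose below.
    private SqlConnection conn;

    public ActionResult GetPdf(string id)
    {
        conn = new SqlConnection("..."); // connection string elided
        conn.Open();
        var cmd = new SqlCommand("SELECT PDFData FROM eBillData WHERE PURL = @id", conn);
        cmd.Parameters.AddWithValue("@id", id);
        // SequentialAccess lets GetStream expose the BLOB without buffering it whole.
        SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess);
        if (!reader.Read())
            return View("PDFNotFound");
        return new FileStreamResult(reader.GetStream(0), "application/pdf");
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing && conn != null)
            conn.Dispose();
        base.Dispose(disposing);
    }
}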
In addition to David's advice, I noticed that I was doing the following:
db.eBillDatas.ToList<eBillData>()
so I was materializing the entire table in memory and then filtering it with the Where clause.
I didn't notice the problem until the database started to fill up.
I removed that part and now the IIS worker process sits at about 100 MB.
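A sketch of the corrected query, letting Entity Framework translate the filter to SQL instead of pulling the whole table first:

eBillData ebill = db.eBillDatas
    .FirstOrDefault(e => e.PURL == id
        && e.EmailAddress.ToLower().Trim() == emailaddress);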

Blackberry not creating a valid sqlite database

I have a very unusual problem.
I'm trying to create a simple database (6 tables, 4 of which only have 2 columns).
I'm using an in-house database library which I've used in a previous project, and it does work.
However, with my current project there are occasional bugs. Basically, the database isn't created correctly. It is added to the SD card, but when I access it I get a DatabaseException.
When I access the device from the desktop manager and try to open the database (with SQLite Database Browser v2.0b1) I get "File is not a SQLite 3 database".
UPDATE
I found that this happens when I delete the database manually off the SD card.
Since there's no way to stop a user from doing that, is there anything I can do to handle it?
CODE
public static boolean initialize()
{
    boolean memory_card_available = ApplicationInterface.isSDCardIn();
    String application_name = ApplicationInterface.getApplicationName();

    if (memory_card_available == true)
    {
        file_path = "file:///SDCard/" + application_name + ".db";
    }
    else
    {
        file_path = "file:///store/" + application_name + ".db";
    }

    try
    {
        uri = URI.create(file_path);
        FileClass.hideFile(file_path);
    }
    catch (MalformedURIException mue)
    {
        // swallowed: uri stays null and create(uri) fails silently
    }

    return create(uri);
}

private static boolean create(URI db_file)
{
    boolean response = false;
    try
    {
        db = DatabaseFactory.create(db_file);
        db.close();
        response = true;
    }
    catch (Exception e)
    {
        // swallowed: any creation failure just returns false
    }
    return response;
}
My only suggestion is to keep a default database in your assets: if there is a problem with the one on the SD card, attempt to recreate it by copying the default one.
Not a very good answer, I expect.
Since it looks like your problem is that the user is deleting your database, just make sure to catch exceptions when you open it (or access it ... wherever you're getting the exception):
try {
    URI uri = URI.create("file:///SDCard/Databases/database1.db");
    sqliteDB = DatabaseFactory.open(uri);

    Statement st = sqliteDB.createStatement("CREATE TABLE 'Employee' ( " +
        "'Name' TEXT, " +
        "'Age' INTEGER )");
    st.prepare();
    st.execute();
} catch (DatabaseException e) {
    System.out.println(e.getMessage());
    // TODO: decide if you want to create a new database here, or
    // alert the user if the SDCard is not available
}
Note that even though it's probably unusual for a user to delete a private file that your app creates, it's perfectly normal for the SDCard to be unavailable because the device is connected to a PC via USB. So, you really should always be testing for this condition (file open error).
See this answer regarding checking for SDCard availability.
Also, read this about SQLite db storage locations, and make sure to review this answer by Michael Donohue about eMMC storage.
Update: SQLite Corruption
See this link describing the many ways SQLite databases can be corrupted. It definitely sounded to me like the .db file might have been deleted but not the journal / WAL file. If that was the case, you could try deleting database1* programmatically before you create database1.db. But your comments seem to suggest it was something else. Perhaps you could look into the file-locking failure modes, too.
If you are desperate, you might try changing your code to use a different name (e.g. database2, database3) each time you create a new db, to make sure you're not getting artifacts from the previous db.

Finding duration of a video using directshowlib-2005

My ASP.NET (C#) method looks as follows:
static public bool GetVideoLength(string fileName, out double length)
{
    DirectShowLib.FilterGraph graphFilter = new DirectShowLib.FilterGraph();
    DirectShowLib.IGraphBuilder graphBuilder;
    DirectShowLib.IMediaPosition mediaPos;
    length = 0.0;

    try
    {
        graphBuilder = (DirectShowLib.IGraphBuilder)graphFilter;
        graphBuilder.RenderFile(fileName, null);
        mediaPos = (DirectShowLib.IMediaPosition)graphBuilder;
        mediaPos.get_Duration(out length);
        return true;
    }
    catch
    {
        return false;
    }
    finally
    {
        mediaPos = null;
        graphBuilder = null;
        graphFilter = null;
    }
}
I get the duration with the above method, but my problem is that I can't delete the physical file after the operation. I used:
File.Delete(FilePath);
While performing this action I got the following exception:
"The process cannot access the file because it is being used by another process."
My OS is Windows 7 (IIS 7).
Can anyone please help me sort this out?
I've got no experience coding DirectShow apps in C#, but plenty of experience in C++.
DirectShow is based on a technology called COM, which uses reference counting to tell it when an object is in use.
It would use a COM object to represent the IGraphBuilder, for example.
In C++, we would have to deconstruct the graph by removing all its filters, then release the graph.
I understand that C# has its own garbage collection etc., but unless you explicitly release the objects you use, they'll remain in memory.
It seems from the code you've quoted that the graph is still open, even though playback may have finished. In that case, it'll hold a reference to the file you've played back, which would explain why you can't delete it: there's a read lock on the file.
Hope this points you in the right direction!
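A minimal sketch of that fix in C# terms, assuming DirectShowLib. A single Marshal.ReleaseComObject call in the finally block releases the underlying graph (and with it the file source) because the interface casts share one runtime callable wrapper:

static public bool GetVideoLength(string fileName, out double length)
{
    length = 0.0;
    object graph = new DirectShowLib.FilterGraph();
    try
    {
        var graphBuilder = (DirectShowLib.IGraphBuilder)graph;
        int hr = graphBuilder.RenderFile(fileName, null);
        DirectShowLib.DsError.ThrowExceptionForHR(hr);

        var mediaPos = (DirectShowLib.IMediaPosition)graphBuilder;
        hr = mediaPos.get_Duration(out length);
        DirectShowLib.DsError.ThrowExceptionForHR(hr);
        return true;
    }
    catch
    {
        return false;
    }
    finally
    {
        // Release the COM object so the graph is destroyed immediately
        // instead of waiting for the garbage collector.
        System.Runtime.InteropServices.Marshal.ReleaseComObject(graph);
    }
}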

Process Lock Code Illustration Needed

I recently started this question in another thread (to which Reed Copsey graciously responded), but I don't feel I framed the question well.
At the core of my question, I would like an illustration of how to gain access to data AS it is being get/set.
I have Page.aspx.cs and, in the codebehind, I have a loop:
List<ServerVariable> files = new List<ServerVariable>();

for (i = 0; i <= Request.Files.Count - 1; i++)
{
    m_objFile = Request.Files[i];
    m_strFileName = m_objFile.FileName;
    m_strFileName = Path.GetFileName(m_strFileName);
    files.Add(new ServerVariable(i.ToString(), this.m_strFileName, "0"));
}

//CODE TO COPY A FILE FOR UPLOAD TO THE
//WEB SERVER
//
//WHEN THE UPLOAD IS DONE, SET THE ITEM TO
//COMPLETED

int index = files.FindIndex(p => p.Completed == "0");
files[index] = new ServerVariable(i.ToString(), this.m_strFileName, "1");
The "ServerVariable" type gets and sets ID, File, and Completed.
Now, I need to show the user the file upload "progress" (in effect, the time between when the loop adds the ServerVariable item to the list and when the Completed status changes from 0 to 1).
Now, I have a web service method GetStatus() that I would like to use to return the files list (created above) as a JSON string (via jQuery). Files with a completed status of 0 are still in progress; files with a 1 are done.
MY QUESTION IS: what does the code inside GetStatus() look like? How do I query List<ServerVariable> *as* it is being populated and return the results in real time? I have been advised that I need to lock the working process (setting the ServerVariable data) while I query the values returned in GetStatus(), and then unlock that same process.
If I have explained myself well, I'd appreciate a code illustration of
the logic in GetStatus().
Thanks for reading.
Have a look at this link about multithreading locks.
You need to lock the object on both reads and writes.
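Here is a minimal sketch of that pattern. UploadStatus, Add, MarkCompleted and Snapshot are illustrative names (not from the question); the key point is that the upload loop and GetStatus() share one lock object:

using System.Collections.Generic;

public static class UploadStatus
{
    private static readonly object filesLock = new object();
    private static readonly List<ServerVariable> files = new List<ServerVariable>();

    // Writer side: called from the upload loop in Page.aspx.cs.
    public static void Add(ServerVariable v)
    {
        lock (filesLock)
        {
            files.Add(v);
        }
    }

    public static void MarkCompleted(string id, string fileName)
    {
        lock (filesLock)
        {
            int index = files.FindIndex(p => p.Completed == "0");
            files[index] = new ServerVariable(id, fileName, "1");
        }
    }

    // Reader side: called from the GetStatus() web service method.
    public static List<ServerVariable> Snapshot()
    {
        lock (filesLock)
        {
            // Copy under the lock so JSON serialization never reads the
            // list while the upload thread is mutating it.
            return new List<ServerVariable>(files);
        }
    }
}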
