Changing resolution using Media Foundation under Windows 7 - DirectShow

I am writing an app on top of Media Foundation under Windows 7. I use IMFMediaSource to query the camera's interfaces to get frames and other properties. Strangely, I can't find a way to change the resolution. It seems that with IMFCaptureSource I could use SetCurrentDeviceMediaType to change the resolution, but that is only supported in Windows 8. So is there no way to change the resolution under Windows 7 using Media Foundation? Is there a way to use DirectShow with IMFMediaSource to change the resolution?
If so, can anyone help with a code sample?
Thanks!

OK, so I found it out eventually. I am using IMFSourceReader to get samples from the IMFMediaSource, so after configuring the SourceReader you can iterate the native media types that the camera supports like this:
HRESULT hr = S_OK;
HRESULT nativeTypeErrorCode = S_OK;
DWORD count = 0;
UINT32 streamIndex = 0;
UINT32 requiredWidth = 1600;
UINT32 requiredHeight = 900;

while (nativeTypeErrorCode == S_OK)
{
    IMFMediaType *nativeType = NULL;
    nativeTypeErrorCode = m_pReader->GetNativeMediaType(streamIndex, count, &nativeType);
    if (nativeTypeErrorCode != S_OK)
        continue;

    // get the subtype and frame size of this native media type
    GUID nativeGuid = { 0 };
    hr = nativeType->GetGUID(MF_MT_SUBTYPE, &nativeGuid);
    if (FAILED(hr))
    {
        SafeRelease(&nativeType);
        return hr;
    }

    UINT32 width, height;
    hr = ::MFGetAttributeSize(nativeType, MF_MT_FRAME_SIZE, &width, &height);
    if (FAILED(hr))
    {
        SafeRelease(&nativeType);
        return hr;
    }

    if (nativeGuid == <my type guid> && width == requiredWidth && height == requiredHeight)
    {
        // found native config, set it on the reader
        hr = m_pReader->SetCurrentMediaType(streamIndex, NULL, nativeType);
        SafeRelease(&nativeType);
        if (FAILED(hr))
            return hr;
        break;
    }

    SafeRelease(&nativeType);
    count++;
}
This means I am not creating a new media type with the resolution I require; I get the native media type that already has the configuration I need and set it on the SourceReader.
Hope it will help the future Media Foundation traveler... :)

You can query a DirectShow interface from the IMFMediaSource, which can change the resolution.
For example, for camera control properties I do it like this:
IAMCameraControl *m_pCameraControl = NULL;
HRESULT hr = S_OK;
hr = pMediaSource->QueryInterface(IID_PPV_ARGS(&m_pCameraControl));
if (m_pCameraControl == NULL)
{
    return E_FAIL;
}
In the same way, in your case, I am not sure about the exact interface, but I guess it will be something like the following:
IAMStreamConfig *m_pStreamConfig = NULL;
HRESULT hr = S_OK;
hr = pMediaSource->QueryInterface(IID_PPV_ARGS(&m_pStreamConfig));
if (m_pStreamConfig == NULL)
{
    return E_FAIL;
}
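If the source does expose IAMStreamConfig (I have not verified that the Windows 7 Media Foundation capture source actually returns it, so treat that as an assumption), a minimal sketch of switching to 1600x900 with the standard DirectShow calls could look like this:

int capCount = 0, capSize = 0;
HRESULT hr = m_pStreamConfig->GetNumberOfCapabilities(&capCount, &capSize);
if (SUCCEEDED(hr) && capSize == sizeof(VIDEO_STREAM_CONFIG_CAPS))
{
    for (int i = 0; i < capCount; i++)
    {
        AM_MEDIA_TYPE *pmt = NULL;
        VIDEO_STREAM_CONFIG_CAPS caps;
        hr = m_pStreamConfig->GetStreamCaps(i, &pmt, (BYTE*)&caps);
        if (FAILED(hr))
            continue;

        if (pmt->formattype == FORMAT_VideoInfo && pmt->pbFormat != NULL)
        {
            VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER*)pmt->pbFormat;
            if (pVih->bmiHeader.biWidth == 1600 && pVih->bmiHeader.biHeight == 900)
            {
                hr = m_pStreamConfig->SetFormat(pmt);   // ask the source to switch resolution
                DeleteMediaType(pmt);                   // helper from the DirectShow base classes
                break;
            }
        }
        DeleteMediaType(pmt);
    }
}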

Related

What could be the reason for this Android EditText control converting the input to an ASCII sequence?

For a project I'm working with Xamarin.Forms.
Since one area is just unbearably slow with Xamarin.Forms, I've used a CustomRenderer to solve one particular area where a list is involved.
After getting back to the project and upgrading packages, I've suddenly got the weirdest bug:
I am setting "1234" on an EditText, and the EditText.Text property is suddenly "49505152" - the string is converted to its ASCII equivalent.
Is this a known issue? Does anyone know how to fix it?
The cause of the issue was that my EditText had an InputFilter applied, and after updating a package another code path of FilterFormatted was suddenly executed.
public ICharSequence FilterFormatted(ICharSequence source, int start, int end, ISpanned dest, int dstart, int dend)
{
    var startSection = dest.SubSequenceFormatted(0, dstart);
    var insert = source.SubSequenceFormatted(start, end);
    var endSection = dest.SubSequenceFormatted(dstart, dest.Length());
    var merged = $"{startSection}{insert}{endSection}";

    if (ValidationRegex.IsMatch(merged) && InputRangeCheck(merged, CultureInfo.InvariantCulture))
    {
        StringBuilder sb = new StringBuilder(end - start);
        for (int i = start; i < end; i++)
        {
            char c = source.CharAt(i);
            sb.Append(c);
        }

        if (source is ISpanned)
        {
            SpannableString sp = new SpannableString(sb);
            TextUtils.CopySpansFrom((ISpanned)source, start, sb.Length(), null, sp, 0);
            return sp;
        }
        else
        {
            // AFTER UPDATE THIS PATH WAS ENTERED UNLIKE BEFORE
            return sb;
        }
    }
    else
    {
        return new SpannableString(string.Empty);
    }
}

C++ Builder: convert video to PNG snapshots with DirectShow

Thanks to your help I was able to search for the right words to use DirectShow a bit better.
I found a tutorial on how to use the SampleGrabber object here:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd407288%28v=vs.85%29.aspx
I could implement it and modified it a bit so it isn't just saving the first frame, but every frame to a PNG. For that I use corona.
However, I just guessed my way around and don't quite know which buffers contain my data and in which form.
So I basically have three questions:
Am I using SavePNG right? The resulting images are upside-down!
Can I replace the BaseFilter for the video with one that is connected to a camera?
Does pBuffer contain my image data, so that I can get RGB byte information by simply typing pBuffer[123]?
I'm using Embarcadero's C++ Builder (XE2 16).
Here is the code I found at the website, a bit modified (error handling removed for readability; after each hr = ... there is a FAILED check):
void __fastcall TForm1::btn_kameraClick(TObject *Sender)
{
    HRESULT hr = S_OK;
    IGraphBuilder *pGraph = NULL;
    IMediaControl *pControl = NULL;
    IMediaEventEx *pEvent = NULL;
    IBaseFilter *pGrabberF = NULL;
    ISampleGrabber *pGrabber = NULL;
    IBaseFilter *pSourceF = NULL;
    IEnumPins *pEnum = NULL;
    IPin *pPin = NULL;
    IBaseFilter *pNullF = NULL;
    BYTE *pBuffer = NULL;

    hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGraph));
    hr = pGraph->QueryInterface(IID_PPV_ARGS(&pControl));
    hr = pGraph->QueryInterface(IID_PPV_ARGS(&pEvent));
    hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pGrabberF));
    hr = pGraph->AddFilter(pGrabberF, L"Sample Grabber");
    hr = pGrabberF->QueryInterface(IID_PPV_ARGS(&pGrabber));

    AM_MEDIA_TYPE mt;
    ZeroMemory(&mt, sizeof(mt));
    mt.majortype = MEDIATYPE_Video;
    mt.subtype = MEDIASUBTYPE_RGB24;
    hr = pGrabber->SetMediaType(&mt);

    hr = pGraph->AddSourceFilter(L"C:/Users/Julian/Desktop/homogenität/1,1x_2,7y.mpg", L"Source", &pSourceF);
    hr = pSourceF->EnumPins(&pEnum);
    while (S_OK == pEnum->Next(1, &pPin, NULL))
    {
        hr = ConnectFilters(pGraph, pPin, pGrabberF);
        SafeRelease(&pPin);
        if (SUCCEEDED(hr)) break;
    }

    hr = CoCreateInstance(CLSID_NullRenderer, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pNullF));
    hr = pGraph->AddFilter(pNullF, L"Null Filter");
    hr = ConnectFilters(pGraph, pGrabberF, pNullF);
    hr = pGrabber->SetOneShot(TRUE);
    hr = pGrabber->SetBufferSamples(TRUE);

    long evCode = 0;
    long cbBuffer = 0;
    hr = pControl->Run();
    hr = pEvent->WaitForCompletion(INFINITE, &evCode);
    hr = pGrabber->GetCurrentBuffer(&cbBuffer, NULL);
    pBuffer = (BYTE*)CoTaskMemAlloc(cbBuffer);
    hr = pGrabber->GetConnectedMediaType(&mt);

    CComQIPtr<IMediaSeeking, &IID_IMediaSeeking> pSeeking(pGraph);
    // for (int i = 0; i < 10; i++) {
    bool hui = true;
    int i = 0;
    while (hui)
    {
        REFERENCE_TIME Start = i * UNITS;
        hr = pSeeking->SetPositions(&Start, AM_SEEKING_AbsolutePositioning, NULL, AM_SEEKING_NoPositioning);
        // Sleep(10);
        hr = pEvent->WaitForCompletion(INFINITE, &evCode);
        if (hr != 0) hui = false;
        hr = pGrabber->GetCurrentBuffer(&cbBuffer, (long*)pBuffer);
        if ((mt.formattype == FORMAT_VideoInfo) && (mt.cbFormat >= sizeof(VIDEOINFOHEADER)) && (mt.pbFormat != NULL))
        {
            VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER*)mt.pbFormat;
            // hr = WriteBitmap(("hui" + (String)i + ".bmp").c_str(), &pVih->bmiHeader, mt.cbFormat - SIZE_PREHEADER, pBuffer, cbBuffer);
            hr = SavePNG(i, pBuffer, pVih->bmiHeader.biWidth, pVih->bmiHeader.biHeight);
        }
        else hr = VFW_E_INVALIDMEDIATYPE;
        i++;
    }

    FreeMediaType(mt);
done:
    CoTaskMemFree(pBuffer);
    SafeRelease(&pPin);
    SafeRelease(&pEnum);
    SafeRelease(&pNullF);
    SafeRelease(&pSourceF);
    SafeRelease(&pGrabber);
    SafeRelease(&pGrabberF);
    SafeRelease(&pControl);
    SafeRelease(&pEvent);
    SafeRelease(&pGraph);
}

bool SavePNG(int i, Byte *m_pImageData, long m_Width, long m_Height)
{
    // Make sure there is image data
    if (!m_pImageData)
        return false;

    stringstream FilePath;
    FilePath << "hui" << i << ".png";

    // Create a corona image
    corona::Image *pImage = corona::CreateImage(m_Width, m_Height, corona::PF_R8G8B8, m_pImageData);

    // Make sure the image was created
    if (!pImage)
        return false;

    // Save the image to a PNG file
    corona::SaveImage(FilePath.str().c_str(), corona::FF_PNG, pImage);

    // Delete the corona image
    delete pImage;

    // Nothing went wrong
    return true;
}
I hope I have done nothing horribly wrong... I really tried to research everything ^^
Does somebody know the answers to my 3 questions above?
If you found something really wrong here, I would also appreciate you telling me, so I can fix and improve.
Regards,
Julian
Now that you have a video frame in 24-bit RGB format, all you need is to compress it to PNG. You have choices here:
libpng
GDI+
WIC
Possibly, C++ Builder has native classes that cover PNG as well.
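For example, here is a rough WIC sketch (the helper name and parameters are mine, not from your code) that writes one RGB24 frame to a PNG. Keep in mind that MEDIASUBTYPE_RGB24 buffers are bottom-up DIBs with DWORD-aligned rows and B,G,R byte order, which is most likely why your corona output comes out upside-down; the sketch writes the rows in reverse order to compensate. COM is assumed to be initialized already (it is, since your graph code calls CoCreateInstance).

#include <wincodec.h>

// Hypothetical helper, not part of the original sample: saves one bottom-up RGB24 DIB to a PNG via WIC.
HRESULT SaveRgb24ToPng(const wchar_t *path, const BYTE *pBits, UINT width, UINT height)
{
    const UINT stride = ((width * 3 + 3) / 4) * 4;          // DIB rows are DWORD-aligned

    IWICImagingFactory *pFactory = NULL;
    IWICBitmapEncoder *pEncoder = NULL;
    IWICBitmapFrameEncode *pFrame = NULL;
    IWICStream *pStream = NULL;
    IPropertyBag2 *pProps = NULL;

    HRESULT hr = CoCreateInstance(CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER,
                                  IID_PPV_ARGS(&pFactory));
    if (SUCCEEDED(hr)) hr = pFactory->CreateStream(&pStream);
    if (SUCCEEDED(hr)) hr = pStream->InitializeFromFilename(path, GENERIC_WRITE);
    if (SUCCEEDED(hr)) hr = pFactory->CreateEncoder(GUID_ContainerFormatPng, NULL, &pEncoder);
    if (SUCCEEDED(hr)) hr = pEncoder->Initialize(pStream, WICBitmapEncoderNoCache);
    if (SUCCEEDED(hr)) hr = pEncoder->CreateNewFrame(&pFrame, &pProps);
    if (SUCCEEDED(hr)) hr = pFrame->Initialize(pProps);
    if (SUCCEEDED(hr)) hr = pFrame->SetSize(width, height);

    WICPixelFormatGUID format = GUID_WICPixelFormat24bppBGR;  // RGB24 in DirectShow is B,G,R order
    if (SUCCEEDED(hr)) hr = pFrame->SetPixelFormat(&format);

    // The DIB is bottom-up, so feed the rows to the encoder in reverse order.
    for (UINT y = 0; SUCCEEDED(hr) && y < height; y++)
    {
        const BYTE *row = pBits + (height - 1 - y) * stride;
        hr = pFrame->WritePixels(1, stride, stride, const_cast<BYTE*>(row));
    }
    if (SUCCEEDED(hr)) hr = pFrame->Commit();
    if (SUCCEEDED(hr)) hr = pEncoder->Commit();

    if (pProps)   pProps->Release();
    if (pFrame)   pFrame->Release();
    if (pEncoder) pEncoder->Release();
    if (pStream)  pStream->Release();
    if (pFactory) pFactory->Release();
    return hr;
}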
P.S. The DirectShow API you are using is not DirectX; it is a part of the core Windows SDK.

How to get data from a DirectShow filter output pin?

I have a DirectShow filter which takes an input, processes it, and gives the result to its output pin.
I want to write this filter's output data to a file... and I want to do it in its filter class. So I want to get the output pin buffer data.
In short: how do I reach the final data of the output pin inside its filter? How can I do it?
Note: The output pin is derived from CBaseOutputPin. This is an open-source filter; it "magically" :-) puts the right data on its output pin, which I have not been able to figure out yet...
Update:
Here is the situation:
Media Source ----> GFilter ----> FileWriter
I have the source code of GFilter... I have no source code of FileWriter... What I want is to make GFilter write its own data. I debugged GFilter to get some insight into how it transforms the data, but my attempt to write this data out results in wrong data... So I have decided, for now, to simply get the data at its output pin...
Update[2]:
Somewhere in the filter's output pin the filter writer passes the file writer pin to an IStreamPtr variable... Everything seems to be written to a variable m_pIStream, which is of type IStreamPtr.
HRESULT GFilterOutput::CompleteConnect(IPin *pReceivePin)
{
    // make sure that this is the file writer, supporting
    // IStream, or we will not be able to write out the metadata
    // at stop time
    // m_pIStream is IStreamPtr type
    m_pIStream = pReceivePin;
    if (m_pIStream == NULL)
    {
        return E_NOINTERFACE;
    }
    return CBaseOutputPin::CompleteConnect(pReceivePin);
}
...
HRESULT GFilterOutput::Replace(LONGLONG pos, const BYTE* pBuffer, long cBytes)
{
    //OutputDebugStringA("DEBUG: Now at MuxOutput Replace");
    // all media content is written when the graph is running,
    // using IMemInputPin. On stop (during our stop, but after the
    // file writer has stopped), we switch to IStream for the metadata.
    // The in-memory index is updated after a successful call to this function, so
    // any data not written on completion of Stop will not be in the index.
    CAutoLock lock(&m_csWrite);
    HRESULT hr = S_OK;

    if (m_bUseIStream)
    {
        IStreamPtr pStream = GetConnected();
        if (m_pIStream == NULL)
        {
            hr = E_NOINTERFACE;
        } else {
            LARGE_INTEGER liTo;
            liTo.QuadPart = pos;
            ULARGE_INTEGER uliUnused;
            hr = m_pIStream->Seek(liTo, STREAM_SEEK_SET, &uliUnused);
            if (SUCCEEDED(hr))
            {
                ULONG cActual;
                hr = m_pIStream->Write(pBuffer, cBytes, &cActual);
                if (SUCCEEDED(hr) && ((long)cActual != cBytes))
                {
                    hr = E_FAIL;
                }
            }
        }
    } else {
        // where the buffer boundaries lie is not important in this
        // case, so break writes up into the buffers.
        while (cBytes && (hr == S_OK))
        {
            IMediaSamplePtr pSample;
            hr = GetDeliveryBuffer(&pSample, NULL, NULL, 0);
            if (SUCCEEDED(hr))
            {
                long cThis = min(pSample->GetSize(), cBytes);
                BYTE* pDest;
                pSample->GetPointer(&pDest);
                CopyMemory(pDest, pBuffer, cThis);
                pSample->SetActualDataLength(cThis);

                // time stamps indicate file position in bytes
                LONGLONG tStart = pos;
                LONGLONG tEnd = pos + cThis;
                pSample->SetTime(&tStart, &tEnd);

                hr = Deliver(pSample);
                if (SUCCEEDED(hr))
                {
                    pBuffer += cThis;
                    cBytes -= cThis;
                    pos += cThis;
                }
            }
        }
    }
    return hr;
}
You have the full source code; step through it with a debugger until you reach the point where your filter calls IPin::Receive on the peer downstream filter, update/override the code there, and you have full control for writing the data into a file, etc.
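In this particular filter you do not even have to go down to IPin::Receive: GFilterOutput::Replace already has the raw bytes and the absolute file position before anything is delivered downstream, so the simplest sketch is to tee them into a side file right there. m_dumpFile below is a hypothetical std::ofstream member you would add to GFilterOutput; it is not part of the original source.

#include <fstream>

// Hypothetical member added to GFilterOutput:
//     std::ofstream m_dumpFile;
// opened once, e.g. in CompleteConnect:
//     m_dumpFile.open("gfilter_output.bin", std::ios::binary | std::ios::out);

// Call this at the top of GFilterOutput::Replace, before the IStream / IMemInputPin branches.
void GFilterOutput::DumpToSideFile(LONGLONG pos, const BYTE *pBuffer, long cBytes)
{
    if (!m_dumpFile.is_open())
        return;

    // Replace() writes at absolute file positions, so mirror the same offset in the side file.
    m_dumpFile.seekp(static_cast<std::streamoff>(pos));
    m_dumpFile.write(reinterpret_cast<const char*>(pBuffer), cBytes);
}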

Negotiating an allocator between DirectShow filters fails

I'm developing a custom DirectShow source filter to provide decompressed video data to a rendering filter. I've used the PushSource sample provided by the DirectShow SDK as a basis for my filter. I'm attempting to connect this to a VideoMixingRenderer9 filter.
When creating the graph I'm calling ConnectDirect():
HRESULT hr = mp_graph_builder->ConnectDirect(OutPin, InPin, &mediaType);
but during this call, calling SetProperties on the downstream filter's allocator (in DecideBufferSize()) fails with D3DERR_INVALIDCALL (0x8876086C):
ALLOCATOR_PROPERTIES actual;
memset(&actual,0,sizeof(actual));
hr = pAlloc->SetProperties(pRequest, &actual);
If I let it try to use my allocator (the one provided by CBaseOutputPin) when setting the allocator on the downstream filter, it fails with E_FAIL (in CBaseOutputPin::DecideAllocator):
hr = pPin->NotifyAllocator(*ppAlloc, FALSE);
Any help would be much appreciated!
Thanks.
EDIT:
This is the media type provided by GetMediaType
VIDEOINFOHEADER *pvi = (VIDEOINFOHEADER*)pMediaType->AllocFormatBuffer(sizeof(VIDEOINFOHEADER));
if (pvi == 0)
return(E_OUTOFMEMORY);
ZeroMemory(pvi, pMediaType->cbFormat);
pvi->AvgTimePerFrame = m_rtFrameLength;
pMediaType->formattype = FORMAT_VideoInfo;
pMediaType->majortype = MEDIATYPE_Video;
pMediaType->subtype = MEDIASUBTYPE_RGB24;
pMediaType->bTemporalCompression = FALSE;
pMediaType->bFixedSizeSamples = TRUE;
pMediaType->formattype = FORMAT_VideoInfo;
pvi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
pvi->bmiHeader.biWidth = (640 / 128 + 1) * 128;
pvi->bmiHeader.biHeight = -480; // negative so top down..
pvi->bmiHeader.biPlanes = 1;
pvi->bmiHeader.biBitCount = 24;
pvi->bmiHeader.biCompression = NULL; // ok if rgb else use MAKEFOURCC(...)
pvi->bmiHeader.biSizeImage = GetBitmapSize(&pvi->bmiHeader);
pvi->bmiHeader.biClrImportant = 0;
pvi->bmiHeader.biClrUsed = 0; //Use max colour depth
pvi->bmiHeader.biXPelsPerMeter = 0;
pvi->bmiHeader.biYPelsPerMeter = 0;
SetRectEmpty(&(pvi->rcSource));
SetRectEmpty(&(pvi->rcTarget));
pvi->rcSource.bottom = 480;
pvi->rcSource.right = 640;
pvi->rcTarget.bottom = 480;
pvi->rcTarget.right = 640;
pMediaType->SetType(&MEDIATYPE_Video);
pMediaType->SetFormatType(&FORMAT_VideoInfo);
pMediaType->SetTemporalCompression(FALSE);
const GUID SubTypeGUID = GetBitmapSubtype(&pvi->bmiHeader);
pMediaType->SetSubtype(&SubTypeGUID);
pMediaType->SetSampleSize(pvi->bmiHeader.biSizeImage);
and DecideBufferSize where pAlloc->SetProperties is called
HRESULT CPushPinBitmap::DecideBufferSize(IMemAllocator *pAlloc, ALLOCATOR_PROPERTIES *pRequest) {
    HRESULT hr;
    CAutoLock cAutoLock(CBasePin::m_pLock);

    CheckPointer(pAlloc, E_POINTER);
    CheckPointer(pRequest, E_POINTER);

    if (pRequest->cBuffers == 0) {
        pRequest->cBuffers = 2;
    }
    pRequest->cbBuffer = 480 * ((640 / 128 + 1) * 128) * 3;

    ALLOCATOR_PROPERTIES actual;
    memset(&actual, 0, sizeof(actual));
    hr = pAlloc->SetProperties(pRequest, &actual);
    if (FAILED(hr)) {
        return hr;
    }

    if (actual.cbBuffer < pRequest->cbBuffer) {
        return E_FAIL;
    }

    return S_OK;
}
The constants are only temporary!
There is no way you can use your own allocator with the VMR/EVR filters. They insist on their own, which in turn is backed by DirectDraw/Direct3D surfaces.
To connect directly to the VMR/EVR filters you need a different strategy: the allocator is always theirs, and you need to support extended strides. See Handling Format Changes from the Video Renderer.
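Concretely, that means accepting the media type the renderer attaches to the samples it hands you, and using the stride it encodes in biWidth when you copy the frame. A rough sketch against the CPushPinBitmap pin above (m_pFrame is a hypothetical member holding the 640x480 RGB24 source image, and time stamps are omitted; this is not the exact code from the article):

HRESULT CPushPinBitmap::FillBuffer(IMediaSample *pSample)
{
    CAutoLock cAutoLock(CBasePin::m_pLock);

    // The VMR/EVR can attach a new media type whose biWidth carries the surface stride.
    AM_MEDIA_TYPE *pmt = NULL;
    if (pSample->GetMediaType(&pmt) == S_OK && pmt != NULL)
    {
        CMediaType newType(*pmt);
        SetMediaType(&newType);     // adopt the renderer's type; the new stride lives in biWidth
        DeleteMediaType(pmt);
    }

    VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER*)m_mt.pbFormat;
    const LONG dstStride = pVih->bmiHeader.biWidth * 3;    // bytes per destination row (RGB24)
    const LONG srcStride = 640 * 3;                        // bytes per source row
    const LONG rows = 480;

    BYTE *pDst = NULL;
    HRESULT hr = pSample->GetPointer(&pDst);
    if (FAILED(hr))
        return hr;

    // Copy row by row so the image stays intact even when dstStride > srcStride.
    for (LONG y = 0; y < rows; y++)
    {
        CopyMemory(pDst + y * dstStride, m_pFrame + y * srcStride, srcStride);
    }

    return pSample->SetActualDataLength(rows * dstStride);
}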

How to identify CMYK images in ASP.NET using C#

Does anybody know how to properly identify CMYK images in ASP.NET using C#? When I check the Flags attribute of a Bitmap instance, I get incorrect results.
I have created three images to test this: cmyk.jpg, rgb.jpg and gray.jpg. These are respectively CMYK, RGB and Grayscale images.
This is my test code:
static void Main(string[] args)
{
    Bitmap bmpCMYK = new Bitmap("cmyk.jpg");
    Bitmap bmpRGB = new Bitmap("rgb.jpg");
    Bitmap bmpGray = new Bitmap("gray.jpg");

    Console.WriteLine("\t\tRgb\tCmyk\tGray\tYcbcr\tYcck\tPixelFormat");
    Console.WriteLine("cmyk.jpg\t{0}\t{1}\t{2}\t{3}\t{4}\t{5}",
        IsSet(bmpCMYK, System.Drawing.Imaging.ImageFlags.ColorSpaceRgb),
        IsSet(bmpCMYK, System.Drawing.Imaging.ImageFlags.ColorSpaceCmyk),
        IsSet(bmpCMYK, System.Drawing.Imaging.ImageFlags.ColorSpaceGray),
        IsSet(bmpCMYK, System.Drawing.Imaging.ImageFlags.ColorSpaceYcbcr),
        IsSet(bmpCMYK, System.Drawing.Imaging.ImageFlags.ColorSpaceYcck),
        bmpCMYK.PixelFormat);
    Console.WriteLine("rgb.jpg\t\t{0}\t{1}\t{2}\t{3}\t{4}\t{5}",
        IsSet(bmpRGB, System.Drawing.Imaging.ImageFlags.ColorSpaceRgb),
        IsSet(bmpRGB, System.Drawing.Imaging.ImageFlags.ColorSpaceCmyk),
        IsSet(bmpRGB, System.Drawing.Imaging.ImageFlags.ColorSpaceGray),
        IsSet(bmpRGB, System.Drawing.Imaging.ImageFlags.ColorSpaceYcbcr),
        IsSet(bmpRGB, System.Drawing.Imaging.ImageFlags.ColorSpaceYcck),
        bmpRGB.PixelFormat);
    Console.WriteLine("gray.jpg\t{0}\t{1}\t{2}\t{3}\t{4}\t{5}",
        IsSet(bmpGray, System.Drawing.Imaging.ImageFlags.ColorSpaceRgb),
        IsSet(bmpGray, System.Drawing.Imaging.ImageFlags.ColorSpaceCmyk),
        IsSet(bmpGray, System.Drawing.Imaging.ImageFlags.ColorSpaceGray),
        IsSet(bmpGray, System.Drawing.Imaging.ImageFlags.ColorSpaceYcbcr),
        IsSet(bmpGray, System.Drawing.Imaging.ImageFlags.ColorSpaceYcck),
        bmpGray.PixelFormat);

    bmpCMYK.Dispose();
    bmpRGB.Dispose();
    bmpGray.Dispose();
    Console.ReadLine();
}

private static bool IsSet(Bitmap bitmap, System.Drawing.Imaging.ImageFlags flag)
{
    return (bitmap.Flags & (int)flag) == (int)flag;
}
This produces the following output (the screenshot is not reproduced here):
I have checked the actual images and cmyk.jpg really is a CMYK image.
Apparently, this is a "known issue". Alex Gil had the same problem in WPF (see this question: How to identify CMYK images using C#) and he managed to solve it by using a BitmapDecoder class to load the images. I'm a bit uncomfortable using that solution in ASP.NET because it requires me to add references to WindowsBase.dll and PresentationCore.dll and I'm not sure I want those in a web project.
Does anyone know of any other pure .NET solutions to check if an image is in the CMYK format that I can safely use in ASP.NET?
I use a combination of the ImageFlags and PixelFormat values. Note that PixelFormat.Format32bppCMYK is missing from .NET - I grabbed it out of GdiPlusPixelFormats.h in the Windows SDK.
The trick is that Windows 7 and Server 2008 R2 return the correct pixel format but are missing the image flags. Vista and Server 2008 return an invalid pixel format but the correct image flags. Insanity.
public ImageColorFormat GetColorFormat(this Bitmap bitmap)
{
    const int pixelFormatIndexed = 0x00010000;
    const int pixelFormat32bppCMYK = 0x200F;
    const int pixelFormat16bppGrayScale = (4 | (16 << 8) | 0x00100000); // 0x00101004 = PixelFormat.Format16bppGrayScale

    // Check image flags
    var flags = (ImageFlags)bitmap.Flags;
    if (flags.HasFlag(ImageFlags.ColorSpaceCmyk) || flags.HasFlag(ImageFlags.ColorSpaceYcck))
    {
        return ImageColorFormat.Cmyk;
    }
    else if (flags.HasFlag(ImageFlags.ColorSpaceGray))
    {
        return ImageColorFormat.Grayscale;
    }

    // Check pixel format
    var pixelFormat = (int)bitmap.PixelFormat;
    if (pixelFormat == pixelFormat32bppCMYK)
    {
        return ImageColorFormat.Cmyk;
    }
    else if ((pixelFormat & pixelFormatIndexed) != 0)
    {
        return ImageColorFormat.Indexed;
    }
    else if (pixelFormat == pixelFormat16bppGrayScale)
    {
        return ImageColorFormat.Grayscale;
    }

    // Default to RGB
    return ImageColorFormat.Rgb;
}

public enum ImageColorFormat
{
    Rgb,
    Cmyk,
    Indexed,
    Grayscale
}
An idea: if you don't want to reference those DLLs in your web project, you could do the processing outside the web project, in a service, which may be better anyway.
You might check out FreeImage, which is a Win32 DLL but has a .NET wrapper. I am using it in a production environment and it's great.
I would be surprised if it couldn't provide this information.
(Edit) I didn't notice before that you asked for pure .NET solutions - so maybe this won't work - but I have found it a useful supplement to the limitations of the .NET Framework for image manipulation.
Another idea, if you only need to identify the format, is to extract that directly from the file. I have no idea how complex the specification for the JPEG format might be, but hey, it's only 29 pages!
As previously answered, the most reliable way will be to parse the file's header to retrieve this data.
So here is how I solved the issue you were having, which was the same as mine. Everything in C# seems to return RGB info when you know it's 100% a CMYK image. So what to do? Well, go to the root and read the file. Here is what I did; it is tested to work well, should cover all OSes, and 50 out of 50 images tested correctly. This is .NET 2.0 too, just in case.
public bool isByteACMYK(Stream image)
{
    using (StreamReader sr = new StreamReader(image))
    {
        string contents = sr.ReadToEnd();
        if (contents.ToLower().Contains("cmyk"))
        {
            return true;
        }
    }
    return false;
}

public bool isFileACMYKJpeg(System.Drawing.Image image)
{
    System.Drawing.Imaging.ImageFlags flagValues = (System.Drawing.Imaging.ImageFlags)Enum.Parse(typeof(System.Drawing.Imaging.ImageFlags), image.Flags.ToString());
    if (flagValues.ToString().ToLower().IndexOf("ycck") == -1)
    {
        // based on http://www.maxostudio.com/Tut_CS_CMYK.cfm
        bool ret = false;
        try
        {
            int cmyk = (image.Flags & (int)ImageFlags.ColorSpaceCmyk);
            int ycck = (image.Flags & (int)ImageFlags.ColorSpaceYcck);
            ret = ((cmyk > 0) || (ycck > 0));
        }
        catch (Exception ex)
        {
        }
        return ret;
    }
    return true;
}
// my upload test... but you could turn a file into a stream and do the same
public void UpdatePool(HttpPostedFile newimage)
{
    if (newimage.ContentLength != 0)
    {
        Stream stream = newimage.InputStream;
        MemoryStream memoryStream = new MemoryStream();
        CopyStream(stream, memoryStream);
        memoryStream.Position = 0;
        stream = memoryStream;

        System.Drawing.Image processed_image = null;
        processed_image = System.Drawing.Image.FromStream(newimage.InputStream);

        if (imageService.isFileACMYKJpeg(processed_image) || imageService.isByteACMYK(stream))
        {
            Flash["error"] = "You have uploaded a CMYK image. Please convert to RGB first.";
            RedirectToReferrer();
            return;
        }
    }
}
cheers - Jeremy
I was under the assumption that everything in .NET was based on RGB, ARGB and grayscale (as grayscale is RGB(128, 128, 128)).
If my assumption is correct, then you will have to go the third-party route.
