JavaFX media bytes

I want to create a sound wave in my Java program from an MP3 file. I researched and found that for WAV files I need to use an AudioInputStream and calculate a byte array... For MP3 files I am using JavaFX Media and MediaPlayer. Are the bytes from the InputStream the same as those from the JavaFX media.getSource().getBytes()? An AudioInputStream can't read MP3...
Or how am I supposed to get the values of an MP3 file for a sound wave?
Bytes from the AudioInputStream:
AudioInputStream audioInputStream;
try {
    audioInputStream = AudioSystem.getAudioInputStream(next);
    int frameLength = (int) audioInputStream.getFrameLength();
    int frameSize = (int) audioInputStream.getFormat().getFrameSize();
    byte[] bytes = new byte[frameLength * frameSize];
    audioInputStream.read(bytes); // actually fill the array with the audio data
    g2.setColor(Color.MAGENTA);
    for (int p = 0; p < bytes.length; p++) {
        g2.fillRect(20 + (p * 3), 50, 2, bytes[p]);
    }
} catch (UnsupportedAudioFileException | IOException e) {
    e.printStackTrace();
}
And from JavaFX:
Media media;
MediaPlayer player;
media = new Media("blablafile");
player = new MediaPlayer(media);
byte[] bytes = media.getSource().getBytes();

The JavaFX Media API does not provide much low-level support as of Java 10. It seems to be designed with only the necessary features to play media, not manipulate it significantly.
That being said, you might want to look at AudioSpectrumListener. I can't promise it will give you what you want (I'm not familiar with computer-audio concepts) but it may allow you to create your sound-wave; at least a crude representation.
You attach an AudioSpectrumListener to a MediaPlayer via the corresponding audioSpectrumListener property.
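For illustration, here is a minimal sketch of wiring one up; the interval, band count, and the drawing call are placeholder choices, not something from the original question:
Media media = new Media("https://example.com/song.mp3"); // hypothetical URL
MediaPlayer player = new MediaPlayer(media);
player.setAudioSpectrumInterval(0.05); // seconds between updates
player.setAudioSpectrumNumBands(64);   // number of frequency bands reported
player.setAudioSpectrumListener((timestamp, duration, magnitudes, phases) -> {
    // magnitudes[i] is in dB relative to the audioSpectrumThreshold (default -60)
    for (int i = 0; i < magnitudes.length; i++) {
        double barHeight = magnitudes[i] - player.getAudioSpectrumThreshold();
        // drawBar(i, barHeight); // hypothetical drawing routine
    }
});
player.play();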
If your calculations don't have to be in real time, you can do them ahead of time using:
byte[] bytes = URI.create(media.getSource()).toURL().openStream().readAllBytes();
Note, however, that if the media is remote you will end up downloading the bytes twice: once to get the bytes for your sound wave and again when actually playing the media with a MediaPlayer.
Also, you'll want to do the above on a background thread and not the JavaFX Application thread to avoid the possibility of freezing the UI.
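A minimal sketch of that, assuming media is already constructed (javafx.concurrent.Task is one option; a plain Thread would do as well):
Task<byte[]> readTask = new Task<>() {
    @Override
    protected byte[] call() throws Exception {
        try (InputStream in = URI.create(media.getSource()).toURL().openStream()) {
            return in.readAllBytes(); // Java 9+
        }
    }
};
readTask.setOnSucceeded(e -> {
    byte[] bytes = readTask.getValue(); // back on the FX thread; build the sound wave here
});
new Thread(readTask, "media-bytes-reader").start();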

Related

Video broadcast using NDI SDK 4.5 in iOS 13 not working. Receiver in LAN does not receive any video packets

I have been trying to use NDI SDK 4.5, in an Objective-C iOS 13 app, to broadcast camera capture from an iPhone device.
My sample code is in public Github repo: https://github.com/bharatbiswal/CameraExampleObjectiveC
Following is how I send CMSampleBufferRef sampleBuffer:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NDIlib_video_frame_v2_t video_frame;
video_frame.xres = VIDEO_CAPTURE_WIDTH;
video_frame.yres = VIDEO_CAPTURE_HEIGHT;
video_frame.FourCC = NDIlib_FourCC_type_UYVY; // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
video_frame.line_stride_in_bytes = VIDEO_CAPTURE_WIDTH * VIDEO_CAPTURE_PIXEL_SIZE;
video_frame.p_data = CVPixelBufferGetBaseAddress(pixelBuffer);
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame);
I have been using "NewTek NDI Video Monitor" to receive the video from network. However, even though it shows as source, the video does not play.
Has anyone used NDI SDK in iOS to build broadcast sender or receiver functionalities? Please help.
You should use kCVPixelFormatType_32BGRA in the video settings, and NDIlib_FourCC_type_BGRA as the FourCC in NDIlib_video_frame_v2_t.
Are you sure about your VIDEO_CAPTURE_PIXEL_SIZE?
When I worked with NDI on macOS I had the same black-screen problem, and it was due to a wrong line stride.
Maybe this can help: https://developer.apple.com/documentation/corevideo/1456964-cvpixelbuffergetbytesperrow?language=objc
Also, it seems the pixel formats from Core Video and NDI don't match.
On the Core Video side you are using bi-planar Y'CbCr 8-bit 4:2:0, and on the NDI side you are using NDIlib_FourCC_type_UYVY, which is Y'CbCr 4:2:2.
I cannot find any bi-planar Y'CbCr 8-bit 4:2:0 pixel format on the NDI side.
You may have more luck using the following combination:
Core Video: https://developer.apple.com/documentation/corevideo/1563591-pixel_format_identifiers/kcvpixelformattype_420ypcbcr8planarfullrange?language=objc
NDI: NDIlib_FourCC_type_YV12
Hope this helps!
In my experience, you have two mistakes. To use CVPixelBufferGetBaseAddress on a CVPixelBuffer, the CVPixelBufferLockBaseAddress method must be called first; otherwise it returns a null pointer.
https://developer.apple.com/documentation/corevideo/1457128-cvpixelbufferlockbaseaddress?language=objc
Secondly, NDI does not support bi-planar YUV 4:2:0 (the default format for iOS cameras). More precisely, NDI only accepts a single data pointer. In other words, you have to merge the two planes into one contiguous memory area and then pass it in NV12 format. See the NDI documentation for details.
So your code should look like the following. Note that if you send asynchronously instead of with NDIlib_send_send_video_v2, a strong reference to the transferred memory must be kept until the NDI library has completed the transfer.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);

int width = (int)CVPixelBufferGetWidth(pixelBuffer);
int height = (int)CVPixelBufferGetHeight(pixelBuffer);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);

NDIlib_FourCC_video_type_e ndiVideoFormat;
uint8_t* pixelData;
int stride;
if (pixelFormat == kCVPixelFormatType_32BGRA) {
    ndiVideoFormat = NDIlib_FourCC_type_BGRA;
    pixelData = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer); // Or copy for asynchronous transmit.
    stride = width * 4;
} else if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    ndiVideoFormat = NDIlib_FourCC_type_NV12;
    uint8_t* yPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    int yPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int ySize = yPlaneBytesPerRow * height;
    uint8_t* uvPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    int uvPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    int uvSize = uvPlaneBytesPerRow * (height / 2); // the UV plane of 4:2:0 video is half the height
    // NDI expects a single contiguous buffer, so merge the two planes.
    pixelData = (uint8_t*)malloc(ySize + uvSize);
    memcpy(pixelData, yPlane, ySize);
    memcpy(pixelData + ySize, uvPlane, uvSize);
    stride = yPlaneBytesPerRow;
} else {
    return;
}

NDIlib_video_frame_v2_t video_frame;
video_frame.xres = width;
video_frame.yres = height;
video_frame.FourCC = ndiVideoFormat;
video_frame.line_stride_in_bytes = stride;
video_frame.p_data = pixelData;

NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame); // synchronous sending
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// For the synchronous sending case: free the copied data (or use pre-allocated memory).
if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    free(pixelData);
}

Java 9 ImageIO read/write gives a different result than Java 8

The following test fails on Java 9 while passes in Java 8:
@Test
public void getImage_SetValueUsingConstructor_ShouldReturnCorrectValue() throws Exception {
    String base64ImageString = "iVBORw0KGgoAAAANSUhEUgAAAAQAAAAECAIAAAAmkwkpAAAAEUlEQVR42mNgQAP/wQAbBw4ANwsL9Zo6V30AAAAASUVORK5CYII=";
    byte[] rawImageBytes = Base64.getDecoder().decode(base64ImageString);
    ByteArrayInputStream bis = new ByteArrayInputStream(rawImageBytes);
    RenderedImage image = ImageIO.read(bis);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ImageIO.write(image, "PNG", bos);
    byte[] imageBytesFromImage = bos.toByteArray();
    assertArrayEquals(imageBytesFromImage, rawImageBytes);
}
Java 9 output:
arrays first differed at element [42];
Expected :94
Actual :-38
Can anyone help me understand what was changed in Java 9, and is there a way to write this code so that it will work for both Java 8 & 9?
As @Holger has pointed out in the comments, it is really the test that is flawed. While identical Base64 representations give identical images, different Base64 representations do not mean the image data is different. It may mean only that the same image data is encoded differently and will decode to the exact same image (which is the case here).
The reason your test used to pass without error is probably that you used the Java 8 PNGImageWriter (or an earlier one; it hasn't really changed much since Java 1.4), which is the writer plugin used when you call ImageIO.write(image, "PNG", output), to encode the image, and then created the Base64 representation from that output. If you had created the Base64 representation from a file produced by a different program/library, it would almost certainly be different.
You should rewrite your test; it is, however, not really clear to me what you are trying to test here.
If you only care about pixel data, you could just loop over the pixels and test for equality:
BufferedImage original = ImageIO.read(..);
BufferedImage current = ImageIO.read(..);

assertEquals(original.getWidth(), current.getWidth());
assertEquals(original.getHeight(), current.getHeight());

for (int y = 0; y < original.getHeight(); y++) {
    for (int x = 0; x < original.getWidth(); x++) {
        assertEquals(original.getRGB(x, y), current.getRGB(x, y));
    }
}
If you also need the metadata to be preserved, you also need to test for equality there. But PNG doesn't really contain much interesting metadata, so I doubt you need that.
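If you do, a rough sketch for getting at it (pngBytes is a placeholder for your encoded bytes; comparing the two resulting trees node by node is left out):
ImageInputStream input = ImageIO.createImageInputStream(new ByteArrayInputStream(pngBytes));
ImageReader reader = ImageIO.getImageReaders(input).next();
reader.setInput(input);
IIOMetadata metadata = reader.getImageMetadata(0);
// A DOM tree in the format-independent "javax_imageio_1.0" form:
Node tree = metadata.getAsTree(IIOMetadataFormatImpl.standardMetadataFormatName);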
Thanks to Holger for the comments. What I did is decode an image from the byte array and then compare the DataBuffer of both images.
The test below passes on both Java 8 and 9:
@Test
public void imageTest() throws Exception {
    String base64ImageString = "iVBORw0KGgoAAAANSUhEUgAAAAQAAAAECAIAAAAmkwkpAAAAEUlEQVR42mNgQAP/wQAbBw4ANwsL9Zo6V30AAAAASUVORK5CYII=";
    byte[] rawImageBytes = Base64.getDecoder().decode(base64ImageString);
    ByteArrayInputStream bis = new ByteArrayInputStream(rawImageBytes);
    RenderedImage image = ImageIO.read(bis);
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    ImageIO.write(image, "PNG", bos);
    byte[] imageBytesFromImage = bos.toByteArray();
    //assertArrayEquals(imageBytesFromImage, rawImageBytes); // fails on Java 9!

    bis = new ByteArrayInputStream(imageBytesFromImage);
    RenderedImage image2 = ImageIO.read(bis);

    DataBuffer dbA = image.getData().getDataBuffer();
    int sizeA = dbA.getSize();
    DataBuffer dbB = image2.getData().getDataBuffer();
    int sizeB = dbB.getSize();

    // compare data-buffer objects
    assertEquals(sizeA, sizeB);
    for (int i = 0; i < sizeA; i++) {
        assertEquals(dbA.getElem(i), dbB.getElem(i));
    }
}
The image-comparison code was taken from: How to compare images for similarity using java

Qt using thread pools, unable to receive all the data at once in readyRead()

I'm a newbie in Qt and C++. I'm trying to create a QTcpServer using QThreadPool so it can handle multiple clients. Multiple clients are able to connect without any issues. But I'm trying to send an image from an Android phone with the footer "IMGPNG" indicating the end of the image data. Now the issue: when the readyRead signal is emitted, I'm trying to read all the available data and then perform some string operations later and reconstruct the image. I'm not sure how to receive the complete image for each client and then process it accordingly.
void VireClients::readyRead() // read ready
{
    int nsize = socket->bytesAvailable(); // trying to check the available bytes
    qDebug() << "Bytes Available" << nsize;

    while (socket->bytesAvailable() < nsize) {
        QByteArray data = socket->readAll(); // how to receive all the data and then process it
    }

    /*! These lines call the thread-pool instance and reimplement run */
    imageAnalysis = new VireImageAnalysis(); // creating a new instance of the QRunnable
    imageAnalysis->setAutoDelete(true);
    connect(imageAnalysis, SIGNAL(ImageAnalysisResult(int)), this, SLOT(TaskResult(int)), Qt::QueuedConnection);
    QThreadPool::globalInstance()->start(imageAnalysis);
}
Now I'm not sure how to get the data completely or save the received data in an image format. I want to know how to completely receive the image data. Please help.
A call to readAll() will not always read the complete image, as it obviously cannot know the size of the image. It will only read all currently available bytes, which might be less than your whole file, or more if the sender is really fast and you cannot catch up reading. In the same way, readyRead() only informs you that there are bytes available, not that a whole file has been received. It could be a single byte or hundreds of bytes.
Either you know the size of your image in the first place because it is always fixed, or the sender has to tell the receiver the number of bytes it wants to send.
Then you can either ignore all readyRead() signals until bytesAvailable() matches your image size and call readAll() to read the whole image at once, or you read whenever there are available bytes and fill up your buffer until the number of bytes read matches what the sender told you it would send.
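For illustration, the length-prefix idea looks like this in plain Java (a sketch only, with hypothetical socket and imageBytes variables; the same pattern applies with QTcpSocket on the Qt side):
// Sender: write a 4-byte length prefix, then the payload.
DataOutputStream out = new DataOutputStream(socket.getOutputStream());
out.writeInt(imageBytes.length);
out.write(imageBytes);
out.flush();

// Receiver: read the prefix first, then block until the whole frame has arrived.
DataInputStream in = new DataInputStream(socket.getInputStream());
int expected = in.readInt();
byte[] image = new byte[expected];
in.readFully(image);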
I solved the image-saving issue by collecting the data into a temporary string and finally using OpenCV's imwrite to save the image:
while (iBytesAvailable > 0)
{
    if (socket->isValid())
    {
        char* pzBuff = new char[iBytesAvailable];
        int iReadBytes = socket->read(pzBuff, iBytesAvailable);
        if (iReadBytes > 0)
        {
            result1 += iReadBytes;
            str += std::string(reinterpret_cast<char const *>(pzBuff), iReadBytes);
            if (str.size() > 0) {
                search = str.find("IMGPNG");
                if (search == result1 - 6) { // footer found: the whole image has arrived
                    finalID = QString::fromStdString(str);
                    Singleton_Global *strPtr = Singleton_Global::instance();
                    strPtr->setResult(finalID);
                    /*! Process the received image here */
                    SaveImage = new VSaveImage();
                    SaveImage->setAutoDelete(false);
                    connect(SaveImage, SIGNAL(SaveImageResult(QString)), this, SLOT(TaskResult(QString)), Qt::QueuedConnection);
                    threadPool->start(SaveImage);
                }
            }
        }
        delete[] pzBuff; // release the temporary read buffer
    }
    iBytesAvailable = socket->bytesAvailable(); // re-check for remaining data
}
Finally, the image saving is done in the run method of SaveImage. @DavidSchwartz, you were a great help. Thanks all for your help.

Best practice reading file as InputStream

In Dart, I want to read a BMP file, which could be a big file.
I do it like this:
var inputStream = imageFile.openInputStream();
inputStream.onData = () {
  print(inputStream.available());
  inputStream.read(18); // Some headers
  int width = _readInt(inputStream.read(4));
  int height = _readInt(inputStream.read(4));
  // Other stuff ...
};
It works well with a small image, but when I read a 3 MB file, onData is executed many times. Indeed, onData is triggered for every 65536-byte packet.
What is the best practice?
Should I write a state machine with states like HEADER_STATE, COLORS_STATE, ... to track where I am in the reading, and treat inputStream.read as a buffer?
Or am I missing a reader class?
I fear missing some bytes between two packets.
I'm a little disappointed about this; when I do it in Java, I just write:
inputStream.read(numberOfBytes);
Much easier to use.
Once you have your RandomAccessFile open, you can do something like this:
RandomAccessFile raf; // Initialized elsewhere
int bufferSize = 1024 * 1024; // 1 MB
int offsetIntoFile = 0;
Uint8List byteBuffer = new Uint8List(bufferSize); // 1 MB
Future<int> bytesReadFuture = raf.readList(byteBuffer, offsetIntoFile, bufferSize);
bytesReadFuture.then((bytesRead) {
  // Do something with byteBuffer here.
});
There is also a synchronous call readListSync.
John

Read from file in PlayN

Here's a stupid question.
How do you read files in a PlayN game? I tried using File and Scanner like I usually do in a standard Java program:
void readFromFile() {
    int x;
    int y;
    int pixel;
    int[][] board;
    try {
        Scanner scan = new Scanner(new File(in));
        x = scan.nextInt();
        y = scan.nextInt();
        pixel = scan.nextInt();
        Point start = new Point(scan.nextInt(), scan.nextInt());
        Point dir = new Point(scan.nextInt(), scan.nextInt());
        Point end = new Point(scan.nextInt(), scan.nextInt());
        int antRoads = scan.nextInt();
        board = new int[x][y];
        for (int i = 0; i < y; i++) {
            for (int j = 0; j < x; j++) {
                board[i][j] = scan.nextInt();
            }
        }
        lev = new Level(board, start, dir, end, antRoads, pixel, x, y);
    } catch (FileNotFoundException e) {
        System.out.println(e);
    }
}
I tested File.canRead(), canWrite() and canExecute(), and they all returned false.
Am I supposed to use assetManager().getText() or something? If that's the case, can someone tell me how it works? (Or what ResourceCallback is and how it works?)
My goal is to have a folder named "Maps" filled with maps in regular text format, just like the standard Image folder.
Regards,
Torgeir
You cannot do normal file I/O in a PlayN game, because the games are compiled into JavaScript and run in the browser (when using the HTML5 backend), and the browser supports no file I/O (at least not the general purpose file I/O you would need for these purposes).
Browsers also do not even support the idea of a byte stream, or any sort of binary I/O (this may eventually arrive, but it will be ages before it's supported for all browsers).
So you have to use AssetManager.getText to read data files. You can encode them in JSON if you like and use PlayN.json() to decode them, or you can use your own custom string-based format.
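A rough sketch of what that looks like, assuming the early PlayN AssetManager API (method names may differ between PlayN versions, and "maps/level1.txt" is a hypothetical path):
assetManager().getText("maps/level1.txt", new ResourceCallback<String>() {
    @Override
    public void done(String resource) {
        // resource is the whole file as one string; tokenize it the way
        // Scanner.nextInt() would:
        String[] tokens = resource.trim().split("\\s+");
        int x = Integer.parseInt(tokens[0]);
        int y = Integer.parseInt(tokens[1]);
        // ... build the Level as in the original readFromFile()
    }

    @Override
    public void error(Throwable err) {
        log().error("Failed to load map", err);
    }
});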
If you don't plan to deploy using the HTML5 backend, you can create a "LevelReader" interface and implement that interface in your Android or iOS backend and make use of the native file I/O capabilities on those platforms.
