Read from file in PlayN

Here's a stupid question.
How do you read files in a PlayN game? I tried using File and Scanner, as I usually do in a standard Java program:
void readFromFile(){
    int x;
    int y;
    int pixel;
    int[][] board;
    try{
        Scanner scan = new Scanner(new File(in));
        x = scan.nextInt();
        y = scan.nextInt();
        pixel = scan.nextInt();
        Point start = new Point(scan.nextInt(), scan.nextInt());
        Point dir = new Point(scan.nextInt(), scan.nextInt());
        Point end = new Point(scan.nextInt(), scan.nextInt());
        int antRoads = scan.nextInt();
        board = new int[x][y];
        for (int i = 0; i < y; i++){
            for (int j = 0; j < x; j++){
                board[i][j] = scan.nextInt();
            }
        }
        lev = new Level(board, start, dir, end, antRoads, pixel, x, y);
    } catch(FileNotFoundException e){
        System.out.println(e);
    }
}
I tested File.canRead(), canWrite() and canExecute(), and they all returned false.
Am I supposed to use assetManager().getText() or something? If that's the case, can someone tell me how it works? (Or what ResourceCallback is and how it works?)
My goal is to have a folder named "Maps" filled with maps in regular text-format just like the standard Image folder.
Regards,
Torgeir

You cannot do normal file I/O in a PlayN game, because the games are compiled into JavaScript and run in the browser (when using the HTML5 backend), and the browser supports no file I/O (at least not the general purpose file I/O you would need for these purposes).
Browsers also do not even support the idea of a byte stream, or any sort of binary I/O (this may eventually arrive, but it will be ages before it's supported for all browsers).
So you have to use AssetManager.getText to read data files. You can encode them in JSON if you like and use PlayN.json() to decode them, or you can use your own custom string-based format.
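For example, here is a minimal sketch of loading one of your maps with getText, assuming the PlayN 1.x API; the path "Maps/level1.txt" is hypothetical, and the Maps folder would need to live under the standard assets directory (next to the images folder) so every backend can find it:
PlayN.assetManager().getText("Maps/level1.txt", new ResourceCallback<String>() {
    @Override
    public void done(String text) {
        // Scanner can read from an in-memory String just as well as from a File.
        Scanner scan = new Scanner(text);
        int x = scan.nextInt();
        int y = scan.nextInt();
        // ... parse the rest exactly as in the File-based version ...
    }

    @Override
    public void error(Throwable err) {
        PlayN.log().error("Failed to load map", err);
    }
});
The callback is asynchronous: the done method fires once the text has arrived, so the Level should be constructed inside done rather than after the getText call returns.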
If you don't plan to deploy using the HTML5 backend, you can create a "LevelReader" interface and implement that interface in your Android or iOS backend and make use of the native file I/O capabilities on those platforms.
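A sketch of that interface approach (all names here are illustrative, not part of PlayN):
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public interface LevelReader {
    String readLevel(String name);
}

// Implementation for a backend with real file I/O (e.g. Java/Android):
class FileLevelReader implements LevelReader {
    public String readLevel(String name) {
        try (Scanner s = new Scanner(new File("Maps", name)).useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : ""; // slurp the whole file as one token
        } catch (FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
Each backend constructs its own implementation at bootstrap time and hands it to the shared game code; an HTML5 backend would instead fall back to getText.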

Related

Video broadcast using NDI SDK 4.5 in iOS 13 not working. Receiver in LAN does not receive any video packets

I have been trying to use NDI SDK 4.5 in an Objective-C iOS 13 app to broadcast camera capture from an iPhone device.
My sample code is in public Github repo: https://github.com/bharatbiswal/CameraExampleObjectiveC
Following is how I send CMSampleBufferRef sampleBuffer:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NDIlib_video_frame_v2_t video_frame;
video_frame.xres = VIDEO_CAPTURE_WIDTH;
video_frame.yres = VIDEO_CAPTURE_HEIGHT;
video_frame.FourCC = NDIlib_FourCC_type_UYVY; // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
video_frame.line_stride_in_bytes = VIDEO_CAPTURE_WIDTH * VIDEO_CAPTURE_PIXEL_SIZE;
video_frame.p_data = CVPixelBufferGetBaseAddress(pixelBuffer);
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame);
I have been using "NewTek NDI Video Monitor" to receive the video from the network. However, even though it shows up as a source, the video does not play.
Has anyone used NDI SDK in iOS to build broadcast sender or receiver functionalities? Please help.
You should use kCVPixelFormatType_32BGRA in the video settings, and NDIlib_FourCC_type_BGRA as the FourCC in NDIlib_video_frame_v2_t.
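For example (a sketch; `output` is assumed to be your AVCaptureVideoDataOutput):
output.videoSettings = @{
    (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
// ...and on the NDI side:
video_frame.FourCC = NDIlib_FourCC_type_BGRA;
video_frame.line_stride_in_bytes = 4 * video_frame.xres; // BGRA: 4 bytes per pixel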
Are you sure about your VIDEO_CAPTURE_PIXEL_SIZE? When I worked with NDI on macOS I had the same black-screen problem, and it was due to a wrong line stride.
Maybe this can help: https://developer.apple.com/documentation/corevideo/1456964-cvpixelbuffergetbytesperrow?language=objc
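In other words, instead of hard-coding VIDEO_CAPTURE_WIDTH * VIDEO_CAPTURE_PIXEL_SIZE, you can ask Core Video for the actual stride (a sketch; for planar formats use CVPixelBufferGetBytesPerRowOfPlane instead):
// Rows can be padded, so query the real bytes-per-row:
video_frame.line_stride_in_bytes = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);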
Also, it seems the pixel formats from Core Video and NDI don't match.
On the Core Video side you are using bi-planar Y'CbCr 8-bit 4:2:0, and on the NDI side you are using NDIlib_FourCC_type_UYVY, which is Y'CbCr 4:2:2.
I cannot find any bi-planar Y'CbCr 8-bit 4:2:0 pixel format on the NDI side.
You may have more luck using the following combination:
core video: https://developer.apple.com/documentation/corevideo/1563591-pixel_format_identifiers/kcvpixelformattype_420ypcbcr8planarfullrange?language=objc
NDI: NDIlib_FourCC_type_YV12
Hope this helps!
In my experience, you have made two mistakes. First, to use CVPixelBufferGetBaseAddress on a CVPixelBuffer, the CVPixelBufferLockBaseAddress method must be called first; otherwise it returns a null pointer.
https://developer.apple.com/documentation/corevideo/1457128-cvpixelbufferlockbaseaddress?language=objc
Secondly, NDI does not support biplanar YUV420 (the default format for iOS cameras). More precisely, NDI only accepts one data pointer, so you have to merge the two planes into one contiguous memory area and then pass it in NV12 format. See the NDI documentation for details.
So your code should look like the following. Note that if you send asynchronously instead of using the synchronous NDIlib_send_send_video_v2, a strong reference to the transferred memory area must be maintained until the NDI library has completed the transfer.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int width = (int)CVPixelBufferGetWidth(pixelBuffer);
int height = (int)CVPixelBufferGetHeight(pixelBuffer);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
NDIlib_FourCC_video_type_e ndiVideoFormat;
uint8_t* pixelData;
int stride;
if (pixelFormat == kCVPixelFormatType_32BGRA) {
    ndiVideoFormat = NDIlib_FourCC_type_BGRA;
    pixelData = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer); // Or copy for asynchronous transmit.
    stride = width * 4;
} else if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    ndiVideoFormat = NDIlib_FourCC_type_NV12;
    pixelData = (uint8_t*)malloc(width * height * 3 / 2); // NV12 is 12 bits per pixel.
    uint8_t* yPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    int yPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int ySize = yPlaneBytesPerRow * height;
    uint8_t* uvPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    int uvPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    int uvSize = uvPlaneBytesPerRow * (height / 2); // The UV plane of 4:2:0 has half the rows.
    stride = yPlaneBytesPerRow;
    memcpy(pixelData, yPlane, ySize);
    memcpy(pixelData + ySize, uvPlane, uvSize);
} else {
    return;
}
NDIlib_video_frame_v2_t video_frame;
video_frame.xres = width;
video_frame.yres = height;
video_frame.FourCC = ndiVideoFormat;
video_frame.line_stride_in_bytes = stride;
video_frame.p_data = pixelData;
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame); // Synchronous sending.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// For the synchronous sending case: free the data, or use pre-allocated memory.
if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    free(pixelData);
}

Finding pointer with 'find out what writes to this address' strange offset

I'm trying to find a base pointer for UrbanTerror42.
My setup is as follows: I have a server with 2 players.
Cheat Engine runs on client A.
I climb a ladder with client B and then scan for increase/decrease.
When I have found the values, I use "find out what writes to this address".
But the offsets are very high and point to empty memory.
I don't really know how to proceed.
For the sake of clarity: I have looked up several other values, and they have the same problem.
I've already looked at a number of tutorials and forums, but those always deal with offsets between 0 and 100, not 80614.
I would really appreciate it if someone could tell me why this happens and what I have to do or learn to proceed.
Thanks in advance.
Urban Terror uses the Quake engine. Early versions of this engine use the Quake Virtual Machine, and the game logic is implemented as bytecode which is compiled into assembly by the Quake Virtual Machine. Custom allocation routines are used to load these modules into memory; relative and hardcoded offsets/addresses are created at runtime to accommodate these relocations, rather than using the normal relocation-table mechanism of the portable executable file format. This is why you see these seemingly strange numbers that change every time you run the game.
The Quake Virtual Machine modules use the .qvm file format, and the loaded QVMs are tracked in memory in a QVM table. You must find the QVM table to uncover this mystery. Once you find the 2-3 QVMs and record their addresses, finding the table is easy: you simply scan for pointers that point to those addresses and narrow down your results by finding the ones which are close to each other in memory.
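The pointer scan itself is simple once you can read the game's memory; here is a minimal sketch of the idea (obtaining the memory buffer, e.g. via ReadProcessMemory or a Cheat Engine scan, is up to you):
#include <cstdint>
#include <cstring>
#include <vector>

// Return the offsets within `region` that hold a pointer-sized value equal to `target`.
std::vector<size_t> findPointersTo(const uint8_t* region, size_t size, uintptr_t target)
{
    std::vector<size_t> hits;
    for (size_t off = 0; off + sizeof(uintptr_t) <= size; off += sizeof(uintptr_t))
    {
        uintptr_t value;
        std::memcpy(&value, region + off, sizeof(value));
        if (value == target)
            hits.push_back(off);
    }
    return hits;
}
Offsets that show up near each other for all two or three QVM addresses are your candidate QVM table.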
The QVM structures are defined like this:
struct vmTable_t
{
    vm_t vm[3];
};

struct vm_s {
    // DO NOT MOVE OR CHANGE THESE WITHOUT CHANGING THE VM_OFFSET_* DEFINES
    // USED BY THE ASM CODE
    int programStack;            // the vm may be recursively entered
    intptr_t (*systemCall)(intptr_t *parms);
    //------------------------------------
    char name[MAX_QPATH];
    // for dynamic linked modules
    void *dllHandle;
    intptr_t entryPoint;         //(QDECL *entryPoint)(int callNum, ...);
    void (*destroy)(vm_s* self);
    // for interpreted modules
    qboolean currentlyInterpreting;
    qboolean compiled;
    byte *codeBase;
    int codeLength;
    int *instructionPointers;
    int instructionCount;
    byte *dataBase;
    int dataMask;
    int stackBottom;             // if programStack < stackBottom, error
    int numSymbols;
    struct vmSymbol_s *symbols;
    int callLevel;               // counts recursive VM_Call
    int breakFunction;           // increment breakCount on function entry to this
    int breakCount;
    BYTE *jumpTableTargets;
    int numJumpTableTargets;
};
typedef struct vm_s vm_t;
The value in EAX in your original screenshot should be the same as either the codeBase or dataBase member variable of the QVM structure. The offsets are just relative to these addresses. Similarly to how you deal with ASLR, you must calculate the addresses at runtime.
Here is a truncated version of my code that does exactly this and additionally grabs important structures from memory, as an example:
void OA_t::GetVM()
{
    cg = nullptr;
    cgs = nullptr;
    cgents = nullptr;
    bLocalGame = false;
    cgame = nullptr;
    for (auto &vm : vmTable->vm)
    {
        if (strstr(vm.name, "qagame")) { bLocalGame = true; continue; }
        if (strstr(vm.name, "cgame"))
        {
            cgame = &vm;
            gamestatus = GSTAT_GAME;
            //char* gamestring = Cvar_VariableString("fs_game");
            switch (cgame->instructionCount)
            {
            case 136054: //version 88
                cgents = (cg_entities*)(cgame->dataBase + 0x1649c);
                cg = (cg_t*)(cgame->dataBase + 0xCC49C);
                cgs = (cgs_t*)(cgame->dataBase + 0xf2720);
                return;
Full source code is available for reference at OpenArena Aimbot Source Code; it even includes a video overview of the code.
Full disclosure: that is a link to my website, and it is the only viable resource I know of that covers this topic.

JavaFX media bytes

I want to create a sound wave in my Java program from an mp3 file. I researched and found out that for wav files I need to use an AudioInputStream and calculate a byte array... For mp3 files I am using JavaFX Media and MediaPlayer. Are the bytes from the AudioInputStream the same as the ones from the JavaFX media.getSource().getBytes()? An AudioInputStream can't read mp3...
Or how am I supposed to get the values of an mp3 file for a sound wave?
Bytes from AudioInputStream:
AudioInputStream audioInputStream;
try {
    audioInputStream = AudioSystem.getAudioInputStream(next);
    int frameLength = (int) audioInputStream.getFrameLength();
    int frameSize = (int) audioInputStream.getFormat().getFrameSize();
    byte[] bytes = new byte[frameLength * frameSize];
    g2.setColor(Color.MAGENTA);
    for (int p = 0; p < bytes.length; p++) {
        g2.fillRect(20 + (p * 3), 50, 2, bytes[p]);
    }
} catch (UnsupportedAudioFileException | IOException e) {
    e.printStackTrace();
}
And from JavaFX:
Media media;
MediaPlayer player;
media = new Media("blablafile");
player = new MediaPlayer(media);
byte[] bytes = media.getSource().getBytes();
The JavaFX Media API does not provide much low-level support as of Java 10. It seems to be designed with only the necessary features to play media, not manipulate it significantly.
That being said, you might want to look at AudioSpectrumListener. I can't promise it will give you what you want (I'm not familiar with computer-audio concepts) but it may allow you to create your sound-wave; at least a crude representation.
You use an AudioSpectrumListener with a MediaPlayer using the corresponding property.
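A minimal sketch of wiring it up (the band count and update interval here are arbitrary choices):
MediaPlayer player = new MediaPlayer(media);
player.setAudioSpectrumNumBands(64);    // number of frequency bands reported
player.setAudioSpectrumInterval(0.05);  // callback every 50 ms
player.setAudioSpectrumListener((timestamp, duration, magnitudes, phases) -> {
    // magnitudes[i] holds the level of band i in dB (relative to the
    // audioSpectrumThreshold); use it to drive your wave drawing.
});
player.play();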
If your calculations don't have to be in real time then you can do them ahead of time using:
byte[] bytes = URI.create(media.getSource()).toURL().openStream().readAllBytes();
Note, however, that if the media is remote, you will end up downloading the bytes twice: once to get the bytes for your sound wave, and again when actually playing the media with a MediaPlayer.
Also, you'll want to do the above on a background thread and not the JavaFX Application thread to avoid the possibility of freezing the UI.
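Putting those two notes together, a sketch (drawSoundWave is a hypothetical method of yours; the usual java.net/java.io/javafx imports are assumed):
new Thread(() -> {
    try (InputStream in = URI.create(media.getSource()).toURL().openStream()) {
        byte[] bytes = in.readAllBytes(); // Java 9+
        Platform.runLater(() -> drawSoundWave(bytes)); // hop back to the FX thread
    } catch (IOException e) {
        e.printStackTrace();
    }
}).start();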

How do you reverse a wav file?

I want to reverse a wav file. I am not sure how to do this however. I have read that you need to reverse the sample stream instead of the byte stream, but I am not sure what people mean by this. Thanks for the help!
A nice way to get some kind of sound manipulation going is to use the Python language plus Pygame.
Pygame allows you to read the contents of a .wav file into a value array with 5 or 6 lines of program, in total. The Python language allows you to reverse that with a simple expression, and you will need 2 or 3 more function calls to Pygame to either save the wav file or play it back.
You can get the latest Python at http://www.python.org, and then look for instructions on how to install modules with pip, at which point you will be able to install Pygame. Then learn some Python basics, if you don't already know them, and follow Pygame's documentation at:
https://www.pygame.org/docs/ref/sndarray.html
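A sketch of that approach (requires pygame and numpy; "input.wav" is a placeholder):
import numpy
import pygame
import pygame.sndarray

pygame.mixer.init()
sound = pygame.mixer.Sound("input.wav")
samples = pygame.sndarray.array(sound)       # one row per sample frame
backwards = pygame.sndarray.make_sound(
    numpy.ascontiguousarray(samples[::-1]))  # reverse the frames, not the bytes
backwards.play()
pygame.time.wait(int(backwards.get_length() * 1000))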
That's right, you need to reverse the samples and not the bytes.
Here's a brief summary from this tutorial.
Grab the bytes after the metadata (usually at index 44).
Reverse the samples by using a for loop:
private static byte[] ReverseTheForwardsArrayWithOnlyAudioData(int bytesPerSample, byte[] forwardsArrayWithOnlyAudioData)
{
    int length = forwardsArrayWithOnlyAudioData.Length;
    byte[] reversedArrayWithOnlyAudioData = new byte[length];
    int sampleIdentifier = 0;
    for (int i = 0; i < length; i++)
    {
        // Each time we cross a sample boundary, step one sample further back
        // in the source (2 * bytesPerSample compensates for i having moved
        // forward through the previous sample).
        if (i != 0 && i % bytesPerSample == 0)
        {
            sampleIdentifier += 2 * bytesPerSample;
        }
        // Copy the bytes of each sample in their original order,
        // starting from the last sample of the source array.
        int index = length - bytesPerSample - sampleIdentifier + i;
        reversedArrayWithOnlyAudioData[i] = forwardsArrayWithOnlyAudioData[index];
    }
    return reversedArrayWithOnlyAudioData;
}

variable in a header file shared between different projects

I have a solution which includes three projects. The first creates a static library, i.e. a .lib file. It contains one header file, main.h, and one source file, main.cpp; the cpp file contains the definitions of the functions declared in the header file.
The second project is an .exe project which includes the header file main.h and calls a function declared in it.
The third project is also an .exe project which includes the header file and uses a variable, flag, from the header file.
Now both .exe projects create different instances of the variable. But I want to share the same instance of the variable between the projects dynamically, as I have to map the value generated by one project into the other project at the same instant.
Please help me, as I am nearing my project deadline.
Thanks for the help.
Here are some parts of the code.
main.cpp and main.h are the files of the .lib project.
main.h
extern int flag;
extern int detect1(void);
main.cpp
#include <stdio.h>
#include "main.h"
#include <Windows.h>
#include <ShellAPI.h>
#include <opencv2/opencv.hpp> // required for the cv:: types used below
using namespace std;
using namespace cv;

int flag = 0;

int detect1(void)
{
    int Cx = 0, Cy = 0, Kx = 20, Ky = 20, Sx = 0, Sy = 0, j = 0;
    //create the cascade classifier object used for the face detection
    CascadeClassifier face_cascade;
    //use the haarcascade_frontalface_alt.xml library
    face_cascade.load("E:\\haarcascade_frontalface_alt.xml");
    //System::DateTime now = System::DateTime::Now;
    //cout << now.Hour;
    //WinExec("E:\\FallingBlock\\FallingBlock\\FallingBlock\\bin\\x86\\Debug\\FallingBlock.exe",SW_SHOW);
    //setup video capture device and link it to the first capture device
    VideoCapture captureDevice;
    captureDevice.open(0);
    //setup image files used in the capture process
    Mat captureFrame;
    Mat grayscaleFrame;
    //create a window to present the results
    namedWindow("capture", 1);
    //create a loop to capture and find faces
    while (true)
    {
        //capture a new image frame
        captureDevice >> captureFrame;
        //convert captured image to gray scale and equalize
        cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
        equalizeHist(grayscaleFrame, grayscaleFrame);
        //create a vector array to store the faces found
        std::vector<Rect> faces;
        //find faces and store them in the vector array
        face_cascade.detectMultiScale(grayscaleFrame, faces, 1.1, 3, CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_SCALE_IMAGE, Size(30, 30));
        //draw a rectangle for all found faces in the vector array on the original image
        for (unsigned int i = 0; i < faces.size(); i++)
        {
            Point pt1(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
            Point pt2(faces[i].x, faces[i].y);
            rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);
            if (faces.size() >= 1)
                j++;
            Cx = faces[i].x + (faces[i].width / 2);
            Cy = faces[i].y + (faces[i].height / 2);
            if (j == 1)
            {
                Sx = Cx;
                Sy = Cy;
                flag = 0;
            }
        }
        if (Cx - Sx > Kx)
        {
            flag = 1;
            printf("%d", flag);
        }
        else
        {
            if (Cx - Sx < -Kx)
            {
                flag = 2;
                printf("%d", flag);
                //update(2);
            }
            else
            {
                if (Cy - Sy > Ky)
                {
                    flag = 3;
                    printf("%d", flag);
                    //update(3);
                }
                else
                {
                    if (Cy - Sy < -Ky)
                    {
                        flag = 4;
                        printf("%d", flag);
                        //update(4);
                    }
                    else
                        if (abs(Cx - Sx) < Kx && abs(Cy - Sy) < Ky)
                        {
                            flag = 0;
                            printf("%d", flag);
                            //update(0);
                        }
                }
            }
        }
2nd project's code
face.cpp
#include"main.h"
#include<stdio.h>
int main()
{
    detect1();
}
3rd project's code
tetris.cpp
#include"main.h"
int key;
key = flag;
if(key==0)
{
MessageBox(hwnd,"Space2","TetRiX",0);
}
if(key==4)
{
tetris.handleInput(1);
tetris.drawScreen(2);
//MessageBox(hwnd,"Space2","TetRiX",0);
}
You need to look up how to do inter-process communication in the operating system under which your applications will run. (At this point I assume that the processes are running on the same computer.) It looks like you're using Windows (based on seeing a call to MessageBox), so the simplest means would be for both processes to use RegisterWindowMessage to create a commonly-understood message value, and then send the data via LPARAM using either PostMessage or SendMessage. (You'll need each process to get the window handle of the other, which is reasonably easy.)

You'll want some sort of exclusion mechanism (mutex or critical section) in both processes to ensure that the shared value can't be read and written at the same time. If both processes can do the "change and exchange" then you'll have an interesting problem to solve if both try to do that at the same time, because you'll have to deal with the possibility of deadlocks over that shared value.
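A sketch of that message-based exchange (the message name and the FindWindow lookup are illustrative):
// Both processes register the same named message at startup:
static const UINT WM_FLAG_CHANGED = RegisterWindowMessageA("MyApp.FlagChanged");

// Sender (the face-detection process): ship the new value in LPARAM.
HWND receiver = FindWindowA(NULL, "TetRiX"); // find the other process's window by title
if (receiver != NULL)
    PostMessageA(receiver, WM_FLAG_CHANGED, 0, (LPARAM)flag);

// Receiver, inside its window procedure:
if (msg == WM_FLAG_CHANGED)
    key = (int)lParam; // the value travels in the message itself
Because the value travels inside the message rather than in shared storage, this particular exchange needs no mutex.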
You can also use shared memory, but that's a bit more involved.
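For completeness, a minimal sketch of sharing a single int that way (the mapping name is arbitrary, but both processes must use the same one):
// Run in both processes; the first call creates the mapping, later calls open it.
HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                 0, sizeof(LONG), "Local\\FaceDetectFlag");
LONG *sharedFlag = (LONG *)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                         0, 0, sizeof(LONG));

// Writer:
InterlockedExchange(sharedFlag, flag);

// Reader (atomic read without changing the value):
int key = (int)InterlockedCompareExchange(sharedFlag, 0, 0);
The Interlocked family makes accesses to the single 32-bit value atomic, so this simple case needs no separate mutex.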
If the processes are on different computers you'll need to do it via TCP/IP or a protocol on top of TCP/IP. You could use a pub-sub arrangement--or any number of things. Without an understanding of exactly what you're trying to accomplish, it's difficult to know what to recommend.
(For the record, there is almost no way in a multi-process/multi-threaded O/S to share something "at the same instant." You can get arbitrarily close, but computers don't work like that.)
Given the level of difficulty involved, is there some other design that might make this cleaner? Why do these processes have to exchange information this way? Must it be done using separate processes?
