Programmatically read Adobe game variables / interact with the game / game bot - adobe

Basically, I'm trying to read a game's chat and catch actions from the user.
Here is the image I will use to explain the situation:
1: I took a message from the chat.
2: I tried to find it in the game's memory with Cheat Engine.
3: By examining every address where it was found, I ended up at this one, which contains the chat formatted with what seems to be HTML.
That part is only the bottom of the chat (I see the rest of it if I scroll up).
So I asked myself how I could read game variables in order to interact with the game.
Another thing I'm trying to achieve is to catch the user's actions so I can display some information in a WinForms window.
I've just read about packet sniffing; it seems interesting for what I'm trying to do.
I tried to read the packets going in and out of this app with Wireshark. Every action in game sent a few packets, but I couldn't read them: they were just a bunch of garbled characters. I tried to decrypt them using a few methods from Wireshark's forum, without success. And even if I could read them in Wireshark, I was asking myself how I would do that programmatically.
There is certainly a good way to do this, as we often see bots in this game.
Considering the number of bots playing "in team", I'm pretty sure they do not use clicks, but run something in the background that sends requests.
How do you make such a bot, one that fights, talks, and interacts with players automatically?
This game is Dofus, powered by Adobe AIR.
I usually program in C++ and C#, but I was wondering what the best way to do this is.
I need a kick in the right direction!

Maybe try a TCP/IP listener control (or the TcpListener class in C#) in your C# project on the appropriate port to catch the requests (and responses). The information sent could be compressed, so you may want to try some standard algorithms.
Did you try to reverse engineer the AIR app?
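To expand on the listener idea: a TCP listener by itself only accepts connections made to it, so to actually see the game's traffic you would typically point the client at a small man-in-the-middle proxy and log what passes through. Purely as an illustration (Node.js/TypeScript rather than C#, with placeholder host and port values, and assuming you can point the client at localhost, e.g. via a hosts-file entry), a minimal sketch could look like this:

```typescript
// Minimal TCP man-in-the-middle proxy sketch (TypeScript / Node.js).
// GAME_HOST / GAME_PORT are placeholders for the real game server.
import * as net from 'net';

const LISTEN_PORT = 5555;              // port the client connects to
const GAME_HOST = 'game.example.com';  // placeholder, not the real server
const GAME_PORT = 5555;                // placeholder

const server = net.createServer((client) => {
  const upstream = net.connect(GAME_PORT, GAME_HOST);

  // Log raw bytes in both directions; the payload may be compressed or
  // encrypted, so start by dumping hex and looking for patterns.
  client.on('data', (chunk) => {
    console.log('client -> server', chunk.toString('hex'));
    upstream.write(chunk);
  });
  upstream.on('data', (chunk) => {
    console.log('server -> client', chunk.toString('hex'));
    client.write(chunk);
  });

  const close = () => { client.destroy(); upstream.destroy(); };
  client.on('error', close);
  upstream.on('error', close);
  client.on('close', close);
  upstream.on('close', close);
});

server.listen(LISTEN_PORT, () =>
  console.log(`proxy listening on ${LISTEN_PORT}`));
```

Even with the proxy in place you only get the raw byte stream; as noted above it may be compressed or encrypted, and the Dofus protocol itself still has to be reverse engineered.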

Related

Any way to detect what device a DialogFlow request came from?

So, the basic idea is this:
I make a request to an assistant app to play some audio.
In general, these requests will come from a Google Home device (I know, they could come from a mobile phone etc, but I'm only worried about Home devices right now).
I can easily send a text response back and have it read.
But what I'd rather do is stream an audio file to the source device via the standard Chromecast mechanisms.
If I make my request something like
"Hey Google, ask {my app} to read me the instructions for {blah} in the {living room}", then it's no problem to pick out "living room" as the target device and send the audio there. That works perfectly.
But if the user says
"Hey Google, ask {my app} to read me the instructions for {blah}"
What I'd like is to be able to send the audio to the device the request came from.
I found this question:
Detect request coming from google home using dialogflow
which is close but not exactly what I'm after.
EDIT: I also found the "MEDIA RESPONSE" info, but it looks like that would only be good for a single media response. In my case, I may have several audio clips that need to play back to back, so a single media response wouldn't work. (at least, I don't see a way it would work from what I know).
Is this even possible?
We currently don't have the ability to treat the Home like a Chromecast through an Action.
However, as you note, we do have the ability to use the Media Response. As you note, it only sends one audio file at a time; however, when it finishes, Dialogflow is called with an event named actions_intent_MEDIA_STATUS, which you can create an Intent to capture. You can then send another Media Response with the next track in the playlist (or do whatever else you want as part of the conversation).
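As a rough illustration of that flow with the actions-on-google Node.js client library (sketched here in TypeScript): the intent names, playlist array, and URLs below are invented, and the "media status" intent is assumed to be configured in Dialogflow with the actions_intent_MEDIA_STATUS event.

```typescript
// Sketch only: chaining several clips with Media Responses.
import {
  dialogflow,
  DialogflowConversation,
  MediaObject,
  Suggestions,
} from 'actions-on-google';

interface ConvData { clipIndex?: number }

const app = dialogflow<ConvData, {}>();

// Hypothetical playlist; in a real app this would come from your data.
const clips = [
  { name: 'Step 1', url: 'https://example.com/audio/step1.mp3' },
  { name: 'Step 2', url: 'https://example.com/audio/step2.mp3' },
];

function playClip(conv: DialogflowConversation<ConvData, {}>, index: number) {
  conv.data.clipIndex = index;
  conv.ask(`Playing ${clips[index].name}.`);
  conv.ask(new MediaObject(clips[index]));
  // A media response needs suggestion chips unless it is the final response.
  conv.ask(new Suggestions('Stop'));
}

app.intent('read instructions', (conv) => {
  playClip(conv, 0);
});

// Fires when a clip finishes, via the actions_intent_MEDIA_STATUS event.
app.intent('media status', (conv) => {
  const next = (conv.data.clipIndex ?? 0) + 1;
  if (next < clips.length) {
    playClip(conv, next);
  } else {
    conv.close('That was the last step.');
  }
});

// Expose `app` as your webhook, e.g. via Firebase Functions or an Express route.
```

Note that the audio plays on the surface the conversation is happening on, which is what the question is after; no explicit target device has to be named.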

Automated system testing for chromecast receiver application

I am wondering if there is a good way of making automated system testing for a Chromecast receiver application?
If you open the application URL in a Chrome browser, the cast_receiver library cannot find the websocket connection on:
ws://localhost:8008/v2/ipc
Since this handles the communication between the app and the Chromecast hardware, I am thinking of something like a Node.js websocket server that can talk to the Chromecast receiver app. Is there such a system, or does anyone know whether Google plans to release something for this kind of testing?
Also, would there be other problems related to the differences between the Chromecast browser and the Chrome browser? As I understand it, the Chromecast browser is just a subset of Chrome, which makes me think it should work.
No, there is no easy way to do this.
DISCLAIMER: I haven't tried any of what I'm about to suggest. It's also probably a terrible idea, as Google could change the protocol at any time and in any fashion they desire, since it isn't a public thing.
BIG DISCLAIMER: You may be in violation of the ToS by doing this as Section 3.2 (Developer Policies) states that you "may not ... develop a standalone technology ... any functionality of any Google Cast Receiver". Possibly, you'd be making a standalone piece of technology that replicated the IPC functionality. But I don't know. I'm not a lawyer.
If you want to go and do this, I'd suggest making a copy of the Google Cast Receiver SDK (www.gstatic.com/cast/sdk/libs/receiver/2.0.0/cast_receiver.js as of April 28, 2015) and altering it so that it logs out the messages that are being sent and received.
Luckily, it appears that we have logging messages to help us find the relevant code.
The receiving method has the string "Received message". I would guess that "a.message" is what is being received.
The sending method has the string "IPC message sent". I would guess that "a" is what is being sent.
Once you've instrumented your copy of the code, you need to publish it somewhere that your receiver app can see it and then you need to edit your receiver app to point to your new and improved SDK. Please please please make sure that you do this on a non-published app for testing purposes only.
Once that is done, you need to find some way to get your messages out of the code and into something that you can access. You have a few options.
Fiddle around with the code more and figure out how to get the Chromecast to log out the data you want;
Store the information in an array and read it using the debugger;
Open your own socket (or websocket) and send that data to a server that you control.
From here, you can run your app, interact with it, and then have a complete record of the IPC messages that were sent and received. Armed with this, you can create your own Fake-IPC server that listens for specific messages and spits out the stuff that is in your log.
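If you go with the third option, a small Node.js websocket server is enough to collect whatever your instrumented SDK sends. A minimal sketch in TypeScript using the ws package (v8+); the port, path, and log file name are arbitrary placeholders:

```typescript
// Minimal message-collection server for option 3.
import * as fs from 'fs';
import { WebSocketServer } from 'ws';

const wss = new WebSocketServer({ port: 9009, path: '/ipc-log' });
const log = fs.createWriteStream('ipc-messages.log', { flags: 'a' });

wss.on('connection', (socket) => {
  console.log('instrumented receiver connected');

  socket.on('message', (data) => {
    // Each entry is whatever the instrumented SDK sent us, one per line.
    log.write(`${new Date().toISOString()} ${data.toString()}\n`);
  });

  socket.on('close', () => console.log('receiver disconnected'));
});

console.log('listening on ws://localhost:9009/ipc-log');
```

The resulting log is the raw material for the fake IPC server described above: something that listens on ws://localhost:8008/v2/ipc and replays the recorded responses when it sees the matching requests.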

Possible to record the Stage to a movie file?

I have a multiuser Flex application. The application includes voice chat from a streaming server as well as various other dynamic interactions.
I was wondering if it is possible to capture the experience in the context of a particular user and write it out to a video file for offline playback / sharing / etc. Something similar to recording a stream from a Camera object, but with the Stage as the input device... I can't think of any way to do this, so I'm putting it out there for the smart people. Thank you in advance.
The only thing I have found is this: http://www.zeropointnine.com/blog/simpleflvwriteras-as3-class-to-create-flvs/ But it is quite old now, so someone must have done something that takes advantage of newer features in Flash/AS3/Flash Player.

How do I ensure that SOAP requests from a flash client to my ASP server are coming from the flash client?

I have a Flash-based game that has a high-score system implemented with a SOAP service. There are prizes involved, and I want to prevent someone from using Firebug or similar to discover the web service path and submit fake scores.
I considered using some kind of encryption on the data, but I am aware that someone could decompile the SWF and work out how I did it.
I also considered using an IP whitelist, but since the incoming data will come from the user's IP and not the server's, that won't work. (I'm sure I'm missing something obvious here...)
I know that there is a tried and tested solution for this, but I don't seem to be asking google the right questions to get to it.
Any help and suggestions will be appreciated, thank you
What you want to achieve is impossible. You can only make it harder for people to do. The best you can do is to use encryption and encrypt the SWF itself, which usually means a larger file size and poorer performance.
The safest method is to evaluate or even run the whole game on the server. You can try to determine whether what the client sends you is possible at all. Rather than making sure people use your client, you're making sure people play the game according to your rules.
greetz
back2dos
All security is based on making things hard; it never makes things impossible. How about having your game register with a separate service when it starts up? It could use client information to build some kind of special code that would be unique for each run of the game. The game could then morph that code in a way that would be hard to emulate. When the game is over, the score gets submitted along with the morphed code and validated on the server side.
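As a very rough sketch of that "special code" idea on the server side (illustrated with Node/Express and TypeScript rather than ASP; the routes, secret, and plausibility rule are made up): the server issues a random session token at game start and only accepts a score that arrives with a matching HMAC and a plausible value. Since the signing secret has to live in the SWF, this only raises the bar, exactly as the answers above say.

```typescript
// Sketch of server-side score validation: session token + HMAC.
// SECRET, the routes and the plausibility rule are placeholders.
import * as crypto from 'crypto';
import express from 'express';

const SECRET = 'replace-with-a-real-secret';
const sessions = new Map<string, { startedAt: number }>();

const app = express();
app.use(express.json());

// The game calls this when it starts; the token is kept for the session.
app.post('/session', (_req, res) => {
  const token = crypto.randomBytes(16).toString('hex');
  sessions.set(token, { startedAt: Date.now() });
  res.json({ token });
});

// The game submits { token, score, sig } where sig = HMAC(SECRET, token + ':' + score).
app.post('/score', (req, res) => {
  const { token, sig } = req.body;
  const score = Number(req.body.score);
  const session = sessions.get(token);
  if (!session) return res.status(400).send('unknown session');

  const expected = crypto
    .createHmac('sha256', SECRET)
    .update(`${token}:${score}`)
    .digest('hex');
  if (sig !== expected) return res.status(400).send('bad signature');

  // Server-side plausibility check, e.g. a score rate that is humanly possible.
  const elapsedSeconds = (Date.now() - session.startedAt) / 1000;
  if (score > elapsedSeconds * 100) return res.status(400).send('implausible score');

  sessions.delete(token); // one submission per session
  res.send('ok');
});

app.listen(3000);
```

The plausibility check is independent of the client, so it still holds even if the signing scheme is reverse engineered.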

Recording from microphone from within the browser using flex or processing

A group of friends is working on a little game that would listen to the microphone as part of the interaction. We've tinkered with Processing and Flex. What we'd like to know is whether anyone has succeeded in:
recording from the microphone using a web app
performing an FFT on this microphone data
In the case of Flex, according to the docs: "Because sound data from a microphone...do not pass through the global SoundMixer object, the SoundMixer.computeSpectrum() method will not return data from those sources." [1]
Your footnote kind of answered your own question. :) No, it is not possible to read the raw bytes from the microphone from the client side. It is possible that Adobe will implement this in Flash 11, but don't hold your breath for it.
If you set up a flash server, such as Red5, then you can read the raw stream on the backend and send FFT data back to the client over AMF. This is actually possible to do with very low latency, though it may still be too high depending on the nature of your application. There are several examples on the Red5 page about how to accomplish things similar to this using a Java webapp working on the backend.
There are a lot of people requesting this feature.
You may see many workarounds involving getMicrophone().
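Purely to illustrate the FFT step mentioned in the Red5 suggestion above (which would normally live in the Java backend), here is a minimal radix-2 Cooley-Tukey sketch in TypeScript; it assumes the microphone samples arrive in power-of-two-sized blocks:

```typescript
// Illustrative radix-2 Cooley-Tukey FFT. Input length must be a power of two.
function fft(re: number[], im: number[]): [number[], number[]] {
  const n = re.length;
  if (n === 1) return [[re[0]], [im[0]]];

  // Split into even- and odd-indexed samples.
  const evenRe: number[] = [], evenIm: number[] = [];
  const oddRe: number[] = [], oddIm: number[] = [];
  for (let i = 0; i < n; i += 2) {
    evenRe.push(re[i]); evenIm.push(im[i]);
    oddRe.push(re[i + 1]); oddIm.push(im[i + 1]);
  }
  const [eRe, eIm] = fft(evenRe, evenIm);
  const [oRe, oIm] = fft(oddRe, oddIm);

  // Combine with the twiddle factors.
  const outRe = new Array<number>(n), outIm = new Array<number>(n);
  for (let k = 0; k < n / 2; k++) {
    const ang = (-2 * Math.PI * k) / n;
    const tRe = Math.cos(ang) * oRe[k] - Math.sin(ang) * oIm[k];
    const tIm = Math.cos(ang) * oIm[k] + Math.sin(ang) * oRe[k];
    outRe[k] = eRe[k] + tRe;           outIm[k] = eIm[k] + tIm;
    outRe[k + n / 2] = eRe[k] - tRe;   outIm[k + n / 2] = eIm[k] - tIm;
  }
  return [outRe, outIm];
}

// Magnitude spectrum of a real-valued sample block (e.g. 1024 mic samples).
function spectrum(samples: number[]): number[] {
  const [re, im] = fft(samples.slice(), new Array(samples.length).fill(0));
  // Only the first half is meaningful for real-valued input.
  return re.slice(0, samples.length / 2).map((r, i) => Math.hypot(r, im[i]));
}
```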
