Is there a way to store values on the client side permanently?
I have a site with a Flash game (the game was not developed by me, of course). After you register, it recognizes you even after you close the browser, clear the cache and cookies, and even restart the computer and modem. Where do they store the values? Why can the Flash game still recognize me after a few days?
After researching on Google, I still can't find the answer. My guess is that it is stored in my computer's RAM. How could that be possible? If my guess is right, how do we store values in RAM?
FYI: the Flash game is created in AS3.
RAM is not persistent across reboots, so nothing stored in RAM can survive even a single reboot. I have read about something called "Local Shared Objects" (Flash cookies), which are a bit more persistent than normal cookies: they are stored on disk by the Flash Player plugin rather than by the browser, so clearing the normal caches/cookies won't clear them away.
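You can verify this yourself on Windows: Local Shared Objects live as .sol files on disk, outside the browser's cookie store. A minimal C# sketch that lists them, assuming Flash Player's default storage path (it may differ per install):

    using System;
    using System.IO;

    // Lists Flash Local Shared Objects (.sol files) on Windows. They sit
    // outside the browser's cookie store, which is why clearing cookies
    // doesn't remove them. The path below is Flash Player's default location.
    class ListSharedObjects
    {
        static void Main()
        {
            string root = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                @"Macromedia\Flash Player\#SharedObjects");

            if (!Directory.Exists(root))
            {
                Console.WriteLine("No #SharedObjects directory found.");
                return;
            }

            foreach (string sol in Directory.EnumerateFiles(root, "*.sol", SearchOption.AllDirectories))
                Console.WriteLine(sol);
        }
    }

Clear your browser cookies, run it again, and the .sol files will still be there.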
It's probably stored on the server, and the client is recognised by IP address, and maybe also by the browser's user-agent string.
Maybe it's a Flash Cookie?
It can't be stored in RAM if, as you say, it remembers you after a reboot.
Your modem has an IP address and also another identifier, a reverse-DNS hostname, which can look like this:
c9067688.static.spo.virtua.com.br
Maybe it's using that identifier...
Does the game recognize you only after logging in to the site, or just from opening the URL? If it recognizes you after login, then the data is probably stored on a server.
I am trying to write a minifilter that more or less captures everything that happens in the kernel, and I was wondering if I could also capture "URLs"/network information. I stumbled upon WinDivert, which seems to use a .sys driver, and also upon another thread which says we cannot get URLs in driver mode, which leaves me a bit confused. If that is true, then how does WinDivert do it?
I understand there is something called a network redirect under minifilters on learn.microsoft.com, which uses a DLL and a .sys file (same as WinDivert), but I could not find any resources to help me build one.
Is there a better way to capture all visited URLs in real time?
Thanks in advance for any help or directions.
You're looking for Windows Filtering Platform and Filtering Platform Callout Drivers, which WinDivert is utilizing. This gives you the data that goes out over the wire, so for plain old HTTP over port 80 you can parse the requests to obtain the URL. This won't work for HTTPS since you're getting encrypted data over the wire; you'd have to implement some kind of MITM interception technique to handle that.
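For the parsing step, here is a minimal C# sketch of pulling the URL out of a captured plain-HTTP request. It assumes you already have the raw TCP payload bytes from your capture layer (a WinDivert receive loop or a WFP callout); ExtractHttpUrl is just an illustrative name, not part of any library:

    using System;
    using System.Text;

    static class HttpUrlSniffer
    {
        // Given the raw TCP payload of a captured packet, recover the URL of a
        // plain HTTP/1.x request, or return null if the payload isn't one.
        public static string ExtractHttpUrl(byte[] tcpPayload)
        {
            // HTTP/1.x requests are plain ASCII:
            // "GET /path HTTP/1.1\r\nHost: example.com\r\n..."
            string text = Encoding.ASCII.GetString(tcpPayload);
            string[] lines = text.Split(new[] { "\r\n" }, StringSplitOptions.None);
            if (lines.Length == 0) return null;

            string[] requestLine = lines[0].Split(' ');
            if (requestLine.Length < 3 || !requestLine[2].StartsWith("HTTP/"))
                return null; // not the start of an HTTP request

            string path = requestLine[1];
            foreach (string line in lines)
            {
                if (line.StartsWith("Host:", StringComparison.OrdinalIgnoreCase))
                    return "http://" + line.Substring(5).Trim() + path;
            }
            return null;
        }
    }

Note this only sees the request line and Host header of unencrypted traffic; for HTTPS there is nothing readable at this layer, as described above.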
I'm not sure if this is the right place to ask this, but I'll do it anyway.
I have developed an uploader for the Umbraco CMS that lets people upload a queue of files in one go. It uses a simple Flash app that calls a .NET .ashx handler to upload the files one at a time; when one is done, the next one starts.
Recently a user hit a problem where 1 or 2 uploads go up fine, but then the rest fail. This happens for him and for a client of his. After some debugging, he thinks he's found the cause, but it seems weird, so I was wondering if anyone else has had this problem.
Both he and his client are on fibre-optic broadband connections, so they have very fast upload speeds. When it was tested on a slower broadband connection, all the files uploaded without a problem. According to one of his developer friends, they had come across this before and had to put a slight delay in the upload script to make it work.
Does this sound possible? Has anyone else hit this problem? Is there a known workaround to prevent the uploads from failing?
I have not struck this precise problem before, but I have done a lot of DSL and broadband troubleshooting, so I will do my best to answer.
There are two possible causes for this particular symptom, both generally outside of your control (I would have thought).
1) Packet loss
Where a link carries a very high volume of traffic, the carrier can choose to simply drop data (e.g. everything over the link's maximum configured rate). But TCP/IP should be handling that, and it is designed to expect that sort of drop from time to time, so this seems less likely.
2) The receiving server
There may be an HTTP bottleneck in front of the receiving server, or the server itself (CPU, RAM, etc.) may be at capacity.
From a troubleshooting perspective, even if these symptoms shouldn't (in theory) exist, the fact is that they do, and you have a specific, reproducible case, so it is worth digging into.
The next step, if you really need to understand how it is all working, might be to use a packet sniffer (like Wireshark) to work out at the packet level exactly what is happening.
Socket programming is another option: you can talk directly to the TCP/IP sockets, so you would be operating at the lower network layers and seeing the responses, timeouts, etc. for yourself.
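For example, a quick C# probe at the socket level against the upload endpoint will surface stalls and timeouts directly as exceptions. This is a sketch only; the host name is a placeholder, and it assumes the server speaks plain HTTP on port 80:

    using System;
    using System.Net.Sockets;
    using System.Text;

    class SocketProbe
    {
        static void Main()
        {
            using (var client = new TcpClient())
            {
                client.SendTimeout = 5000;     // surface slow/stalled sends as exceptions
                client.ReceiveTimeout = 5000;  // ditto for slow responses
                client.Connect("upload.example.com", 80);

                NetworkStream stream = client.GetStream();
                byte[] request = Encoding.ASCII.GetBytes(
                    "HEAD / HTTP/1.1\r\nHost: upload.example.com\r\nConnection: close\r\n\r\n");
                stream.Write(request, 0, request.Length);

                byte[] buffer = new byte[4096];
                int read = stream.Read(buffer, 0, buffer.Length);
                Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, read));
            }
        }
    }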
Also, if you control the receiving server, you can do the same from that end, or at least review the error logs to see what is being reported as a problem.
A really basic check would be to run pathping against the receiving server (e.g. pathping upload.example.com from a Windows command prompt), if that is possible; it might highlight slow hops on the way to the server, or packet loss between your local machine and the end server.
The upshot? Put a slow-down delay in the upload code, and that should at least make it work.
Get in touch if you need any help analysing the Wireshark captures.
I have run into a similar problem with an MVC2 website using Flash uploader and Firefox. The servers were load balanced with a Big-IP load balancer. What we found out in debugging this is that Flash, in Firefox, did not send the session ID on continuation requests and the load balancer would send continuation requests off to another server. Because the user had no session on the new server, the request failed.
If a file could be sent in one chunk, it would upload fine. If it required a second chunk, it failed. Because of this the upload would fail after an undetermined number of files being uploaded.
To fix it, I wrote a Silverlight uploader.
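For reference, a workaround sometimes used instead of replacing the uploader is to append the session ID to the upload URL from the page, and re-inject it server-side before session state is acquired. A sketch under those assumptions (the ASPSESSID parameter name is arbitrary; you would still need shared out-of-process session state, or load-balancer persistence keyed on something the Flash requests do send):

    using System;
    using System.Web;

    // Global.asax code-behind sketch: if the Flash player dropped the browser's
    // cookies, restore the ASP.NET session cookie from a query-string parameter
    // that the page appended to the upload URL.
    public class Global : HttpApplication
    {
        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            string sid = Request.QueryString["ASPSESSID"];
            if (!string.IsNullOrEmpty(sid))
            {
                // Runs before AcquireRequestState, so session state is loaded
                // using the re-injected cookie.
                Request.Cookies.Set(new HttpCookie("ASP.NET_SessionId", sid));
            }
        }
    }

The page would then point the uploader at something like upload.ashx?ASPSESSID=<%= Session.SessionID %>.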
Is there any way to run an NBD (Network Block Device) client and server on the same machine without deadlocking the system?
I am exhausted from trying to find an answer to this. I would appreciate it if anyone can help.
UPDATE:
I'm writing an NBD server that talks to the Google Storage system. I want to mount a file system on the NBD device and back up my files to it. I will be hugely disappointed if I end up having to run the server on another machine. A few ideas I already had seem to lead nowhere:
telling the file system to open the block device using the O_DIRECT flag, to bypass the Linux buffer cache
using a raw device (unfortunately, raw devices are character devices, and FSes refuse to use them as the underlying device)
Just for the record, having the NBD client and server on the same machine has been possible since 2008.
Use a virtual machine (not a container) - you need two kernels, but you don't need two physical machines.
Since the front page of the SourceForge project for NBD says that a deadlock will happen "within seconds" in this scenario, I'm guessing the answer is a big "No."
Try writing a more complete question describing the actual goal you're trying to accomplish. Sometimes you need to bang away at a little problem, and sometimes you need to look at the big picture.
Okay, so let's say I have an integer.
When I execute the program, that integer gets an address.
Makes sense.
But there are many programs out there. When creating a game hack, let's say for Minesweeper, I find the address where that data is stored and change it.
But that hack, that simple hack which just changes some address, works on every computer, every time.
So that data must be getting the same address every time.
And on my computer there are about 30 exes running right now.
Don't other programs want that address? What if they want that address? Why does the hack work every time? Why don't other programs want that very same address? How does it work every time?
Every application gets its own virtual address space (4 GB on 32-bit machines) to overcome that problem in a multitasking operating system.
Here is a pretty good article covering the subject.
Your "hack" is probably locating a process using something like OpenProcess and editing the memory using WriteProcessMemory. That's why it works on "all" machines.
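A minimal C# sketch of that pattern; the process name and address below are made up for illustration, and real hacks find the address with a memory-scanning tool first:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    class MemoryPoker
    {
        const uint PROCESS_VM_WRITE = 0x0020;
        const uint PROCESS_VM_OPERATION = 0x0008;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr OpenProcess(uint dwDesiredAccess, bool bInheritHandle, int dwProcessId);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool WriteProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress,
            byte[] lpBuffer, int nSize, out IntPtr lpNumberOfBytesWritten);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool CloseHandle(IntPtr hObject);

        static void Main()
        {
            Process target = Process.GetProcessesByName("minesweeper")[0];
            IntPtr handle = OpenProcess(PROCESS_VM_WRITE | PROCESS_VM_OPERATION, false, target.Id);

            // The "same address on every machine" is a *virtual* address inside
            // the target's own address space, e.g. a global variable in the EXE
            // image, not a physical RAM location.
            IntPtr address = new IntPtr(0x004053C0); // illustrative, not a real offset
            byte[] newValue = BitConverter.GetBytes(999);
            WriteProcessMemory(handle, address, newValue, newValue.Length, out IntPtr bytesWritten);

            CloseHandle(handle);
        }
    }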
Basically, you need to read about virtual memory. The purpose of virtual memory is to abstract away the physical address space, and give each process (i.e. each application) its own "virtual" address space, which avoids the problem that you're describing.
If your Minesweeper hack consists of manipulating data stored at a specific static address, there's no way it would work on every computer. Program memory allocation is OS-dependent.
Is it possible to know the network card ID of the user's host computer the request is coming from, like the IP address? I am interested to know if this is possible at the IIS or ASP.NET level, or in any other way.
As far as getting network card information is concerned, I see little hope for you here, seeing as a client's hardware profile is not something naturally pushed down the wire as a matter of course. However, see:
HttpContext.Current.Request.UserHostAddress
Or
HttpContext.Current.Request.ServerVariables["REMOTE_ADDR"]
This value will give you the IP address of the calling client, although they may be hitting you through a proxy, so it can't be guaranteed to be a machine-specific address.
If by "network card ID" you mean the Ethernet MAC address, that's assuming a particular technology on the remote side that you have no way of knowing whether or not it is used. Sure, Ethernet is used pretty much everywhere these days, but are you willing to limit yourself to clients that use that particular hardware architecture? So even if it were possible, I doubt you'd want to go down that route.
If what you want is a unique identifier per client computer, you are probably better off issuing some sort of token yourself. A cookie with a randomly generated session ID should work fairly well.
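A minimal sketch of that approach in ASP.NET; the cookie name and lifetime are arbitrary choices:

    using System;
    using System.Web;

    public static class ClientIdCookie
    {
        // Returns the caller's identifier, issuing a new one on first visit.
        public static string GetOrCreate(HttpContext context)
        {
            HttpCookie cookie = context.Request.Cookies["client-id"];
            if (cookie == null || string.IsNullOrEmpty(cookie.Value))
            {
                // A GUID is random enough to be unique per client here.
                cookie = new HttpCookie("client-id", Guid.NewGuid().ToString("N"));
                cookie.Expires = DateTime.UtcNow.AddYears(1); // persist across sessions
                cookie.HttpOnly = true; // keep it out of reach of page scripts
                context.Response.Cookies.Add(cookie);
            }
            return cookie.Value;
        }
    }

Bear in mind this identifies a browser profile rather than a machine: the user can clear the cookie, and two browsers on one computer will get two IDs.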