I'm developing a game, and I want to track how many bullets are shot in the game. It doesn't have to be an exact number; an estimate is fine for me. My options are:
Log a bullet_shoot event whenever a player fires a bullet. (Not good for network bandwidth and performance.)
Have a timer (e.g. 20-second intervals). Check whether any bullets were fired; if so, log the amount.
Have a threshold (25, for example). Once the player has fired that many, log 25 and wait for the next bundle.
What's your best approach for this?
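For what it's worth, here is a minimal sketch of the threshold approach (the third option), assuming a hypothetical send_event() that does the actual network call:

    BULLET_BATCH_SIZE = 25  # the threshold from the question

    class BulletCounter:
        # Counts shots locally and sends one aggregated event per batch.
        def __init__(self, send_event):
            self.send_event = send_event  # placeholder for your analytics sender
            self.pending = 0

        def on_bullet_fired(self):
            self.pending += 1
            if self.pending >= BULLET_BATCH_SIZE:
                # one aggregated event instead of 25 individual ones
                self.send_event("bullets_fired", count=self.pending)
                self.pending = 0

        def flush(self):
            # call on level end / disconnect so the remainder isn't lost
            if self.pending:
                self.send_event("bullets_fired", count=self.pending)
                self.pending = 0

The timer approach (the second option) is the same idea, with flush() called from a periodic callback instead of on a threshold.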
Hey guys,
I am creating a script that sends signals to a Telegram group; Cornix reads those signals and, through its APIs, sends those "trades" to your configured exchange.
So a classic Signal Channel on TG.
The problem I am having is that when an entry price gets detected, the exchange does not receive that trade in time.
By that I mean it takes too much time for the script to send the signal on Telegram and too much time for Cornix to send the trade through.
By the time Cornix opens the limit order that entry price has already changed so I have to hope that the price gets hit again, which as you can guess, is not ideal.
Currently I am using the 5-minute timeframe. I also tried the 15-minute timeframe. Both have the same issue.
Is there a way to fix this delay or somehow reduce it?
Thanks
I've also noticed delays from TV to my bots, and it doesn't really matter which ones. TV signals are mostly delayed by a few seconds; in my case the signal then goes to a bot and from the bot to the exchange, so there will naturally be a delay between the signal appearing and the order actually being placed. Bot to Binance has taken around 1.5 minutes to trade, and you are right, sometimes the price moves quickly. Maybe set market orders to buy and see how it goes. I'd be interested if you find a quicker solution.
I am testing the Squish software for automated testing and trying to record test cases. I have an application that has mapped several keys to perform certain tasks within the application, like PgDn moving focus clockwise or [ moving a small screen element up. When I click the record button in Squish, it only records keypress events for keys that are not mapped, which severely limits my test case creation capability. The only thing I found was two comments from 3+ years ago here, with no response or resolution. When I watch the sponsored tutorials, every keypress is captured in their resulting script. Has anyone experienced this and found a workaround?
I have an application using DynamoDB, and I noticed they just implemented autoscaling, which is awesome. I love the concept, and the timing for my app is pretty perfect. However, I am still seeing some issues that I wonder whether I can tweak settings to remove.
My application gets definite spikes in usage, so I think this is an ideal feature to use. However, with autoscaling on I am still getting some throttling. Here are my read graphs for the last 12 hours:
As you can see, when usage spikes the provisioned capacity is still set low, so requests are throttled for a minute or two until the update kicks in, and then it works. That's okay, I guess, and better than not scaling at all, but I would like it not to throttle at all...
Is there any way to tell DynamoDB never to throttle unless it goes over 100 (or 200, or whatever I set as the top limit)? That is, if it gets a surge, turn up the throughput for 15 minutes or so until the surge is over?
Autoscaling uses CloudWatch alarms. You can see them by going to the CloudWatch dashboard and looking for the alarms that include your table name and "DO NOT EDIT OR DELETE" in the description.
Why am I telling you this?
Well, CloudWatch has a minimum period granularity; currently it's 1 minute. That means it waits at least 1 minute before firing any event to its listeners, so it will take at least one minute after the load starts until the capacity is increased. In practice it will take even longer, since increasing the capacity also takes time. Bottom line: if you have a very large spike, some requests may be throttled, since the auto-scaling will not yet have taken effect and burst capacity can be exhausted.
The simple, but costly, solution is to increase the initial capacity.
If you know about the upcoming spike in advance (e.g. you have some job running periodically, or customers peak at certain times) you can use the API to modify the autoscaling settings programmatically.
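For example, here is a sketch with boto3 that raises the autoscaling floor before a known spike and drops it again afterwards (the table name and capacity numbers are placeholders):

    import boto3

    autoscaling = boto3.client("application-autoscaling")

    def set_read_floor(table_name, min_capacity, max_capacity):
        # Re-registering the scalable target updates its min/max capacity.
        autoscaling.register_scalable_target(
            ServiceNamespace="dynamodb",
            ResourceId=f"table/{table_name}",
            ScalableDimension="dynamodb:table:ReadCapacityUnits",
            MinCapacity=min_capacity,
            MaxCapacity=max_capacity,
        )

    # Before the expected surge: start high so the first requests aren't throttled.
    set_read_floor("MyTable", min_capacity=200, max_capacity=400)

    # After the surge: drop the floor and let autoscaling scale back down.
    set_read_floor("MyTable", min_capacity=5, max_capacity=400)

The same can be done for WriteCapacityUnits if the spike is write-heavy.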
I'm developing a multiplayer network pong game, my first game ever. The current state is: I'm running the physics engine with the same configuration on the server and the clients. The player's own paddle movement is predicted and only confirmed afterwards by the authoritative server. If a difference is detected between them, I correct the position on the client by interpolation. The opponent's paddle is also interpolated 200 ms to 100 ms in the past, because the server broadcasts snapshots to each client every 100 ms.
So far it works very well, but now I have to simulate the ball, and I have a problem understanding the procedure.
I've read Valve's (and many other) articles about fast-paced multiplayer several times and understood their approach. Maybe I can compare my ball with their bullets, but their advantage is that the bullets are not visible. When I have to display the ball, and I see my paddle in the present, the opponent in the past, and the server somewhere in between, how can I synchronize the ball across all instances and ensure that it always gets hit by the paddle, even if the paddle is moving fast? Currently my ball's position is simply set by a server update, so it can happen that the ball bounces back even if the paddle is a few pixels away (because of a delayed server position).
Until now I have no synced clock across the instances. I'm sending a client step index with each update to the server. If the server has done its job, it sends the snapshot back to the clients with the last step index of each client. I then look up the stored position at the returned step index and compare them. Do I need a common clock to sync the ball?
EDIT:
I've tried to sync a common clock for the server and all clients with a timestamp. But I think it's better to use my own stepping instead of a timestamp (so I don't need to calculate with the ping and so on, and the timestamp will never be exact). The physics run 60 times per second, and I now use that for keeping them synchronized. Is that a good way?
When the ball is calculated by each client, the angle after bouncing can differ because of the different positions of the paddles (the opponent is 200 ms in the past). When the server sends its ball position, velocity and angle (because it knows the position of each paddle and is authoritative), the ball could be in a very different position because of the different angles after bouncing (since the clients receive the server data after 100 ms). How is it possible to interpolate such a huge difference?
As you are developing your first game, I think you should try the simplest, brute-force method first. Then you will experience the first exciting result, which will give you the courage to try the better methods.
Like the Source Engine method: process the gameplay on one side and send every object's state to the other side every 1/30 of a second. This is a brute-force method, but it works in a LAN environment.
Now you will find the problems that occur in a WAN environment, where latency is more than 1/30 of a second.
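A rough sketch of that brute-force loop on the server side, assuming a hypothetical step_physics() for the simulation and broadcast() for the network send:

    import time

    TICK = 1 / 30  # send the full state 30 times per second

    def run_server(state, step_physics, broadcast):
        # state: dict holding ball and paddle positions;
        # step_physics and broadcast are placeholders for your own code.
        while True:
            start = time.monotonic()
            step_physics(state, TICK)   # advance the authoritative simulation
            broadcast({                 # push every object's state to all clients
                "ball": state["ball"],
                "paddles": state["paddles"],
            })
            # sleep the remainder of the tick
            time.sleep(max(0.0, TICK - (time.monotonic() - start)))

Clients then simply render whatever state arrives.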
I am not sure it works in practice, but I think that:
Assume that the ball's movement only changes when a player hits it.
Send the ball position P, its velocity, and player A's position only when player A hits the ball towards B.
At B, receive it, but process it as if time L has already passed (L = latency between A and B * 2). However, the rendered ball should keep its previous movement until it reaches the ball position P.
Values which are never affected can be masqueraded, even if it is a time value. :)
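A sketch of that idea on the receiving side; the latency value and step_ball() are placeholders for your measured latency and your per-frame ball integration:

    DT = 1 / 60  # physics step, matching the 60 Hz simulation from the question

    def on_hit_received(msg, latency_s, step_ball):
        # msg carries the ball position P and velocity sent by the hitting player.
        ball = {"pos": msg["pos"], "vel": msg["vel"]}
        # Process it as if time L has already passed (L = latency * 2),
        # so the receiving client catches up to where the ball "really" is.
        steps_to_catch_up = int((latency_s * 2) / DT)
        for _ in range(steps_to_catch_up):
            step_ball(ball, DT)
        # The renderer can keep blending from the previously shown ball
        # toward this corrected one instead of snapping.
        return ball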
I'm using JMeter for load testing. I'm going through an exercise of finding the max number of concurrent threads (users) that our webserver can handle by simply increasing the # of threads in my distributed JMeter test case and firing off the test.
Then it struck me that while the MAX number may be useful, the REAL number of users that my website actually handles on average is the number I need to make the test fruitful.
Here are a few pieces of information about our setup:
This is a mixed .NET/Classic ASP site. Upon login, a browser session (with a timeout) is created in both for the user.
Each session times out after 60 minutes.
Is there a way, using this information, IIS logs, performance counters, and/or some calculation, to determine the average # of concurrent users we handle on our production site?
You might use logparser with the QUANTIZE function to determine the peak number of requests over a suitable interval.
For a 10-second window, it would be something like:
logparser "select quantize(to_localtime(to_timestamp(date,time)), 10) as Qnt,
count(*) as Hits from yourLogFile.log group by Qnt order by Hits desc"
The reported counts won't be exactly the same as threads or users, but they should help get you pointed in the right direction.
The best way to do exact counts is probably with performance counters, but I'm not sure any of the standard ones works like you would want -- you'd probably need to create a custom counter.
I can see a couple options here.
Use Performance Monitor to get the current numbers, or have it log all day and get an average. ASP.NET has a Requests Current counter. According to this page, Classic ASP also has a Requests Current counter, but I've never used it myself.
Run the IIS logs through Log Parser to get the total number of requests and how long each took. I'm thinking that if you know how many requests come in each hour and how long each took, you can get an average of how many were running concurrently.
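That calculation is essentially Little's law: average concurrency = requests per second * average request duration. A rough sketch, assuming you have already pulled the per-request durations (the time-taken field, in milliseconds in IIS W3C logs) out with Log Parser:

    def average_concurrency(time_taken_ms, window_seconds):
        # time_taken_ms: list of per-request durations from the log
        # window_seconds: length of the period the log covers
        total_busy_seconds = sum(time_taken_ms) / 1000.0
        return total_busy_seconds / window_seconds

    # Example: 36,000 requests in one hour, each taking ~500 ms,
    # works out to roughly 5 requests in flight at any moment.
    print(average_concurrency([500] * 36000, 3600))  # -> 5.0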
Also, keep in mind that concurrent users isn't quite the same as concurrent threads on the server. For one, multiple threads will be active per user while content like images is being downloaded. And after that the user will be on the page for a few minutes while the server is idle.
My suggestion is that you define the stop conditions first, such as:
Maximum CPU utilization
Maximum memory usage
Maximum response time for requests
Other key parameters you like
Choosing the parameters is quite subjective, and I personally cannot offer much experience on that.
Secondly, check whether performance counters or IIS logs can be mapped to those parameters, and then set up the proper mappings.
Thirdly, start testing by simulating N users (threads) and see whether the stop conditions are hit. If they are not, go to a higher number; if they are, use a smaller number. By narrowing down recursively you will find a rough number (a sketch of that search is below).
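A sketch of that search, where run_load_test(users) is a placeholder that fires the JMeter plan with that many threads and returns True when any stop condition is hit:

    def find_rough_max(run_load_test, low=1, high=1024):
        # First expand upward until the stop conditions are hit.
        while not run_load_test(high):
            low, high = high, high * 2
        # Then binary-search between the last good and the first bad value.
        while high - low > 1:
            mid = (low + high) // 2
            if run_load_test(mid):
                high = mid
            else:
                low = mid
        return low  # roughly the highest user count below the stop conditions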
However, that never means your web site can take that many users in the real world. No simulation can cover all the edge cases.