I have run into huge difficulties while trying to build a gRPC app with multiple services, and I have been unable to find the cause of my issues. I am starting to think it may be the structure of my proto code.
message Room {
    enum Location {
        Unknown = 0;
        Hallway = 1;
        MaleWC = 2;
        FemaleWC = 3;
        Office = 4;
        Kitchen = 5;
        ConferenceRoom = 6;
    }
    Location room = 1;
    int32 capacity = 2;
    int32 population = 3;
    bool light = 4;
    bool lock = 5;
}
service lightandtempcontrol {
    // create a name for the rpc first -> then specify the type of message to send to the server

    // to turn a light on when somebody enters a room
    rpc LightsOn(targetLight) returns (emptyMessage) {} // Unary*

    // to turn a light off when somebody leaves the room
    rpc LightsOff(targetLight) returns (lightReceipt) {}

    // to regulate temperature depending on the temperature outside
    rpc OutsideTemp(stream outTemp) returns (stream indoorTemp) {} // Bidirectional*

    // to get a complete usage history
    rpc Usage(usageRequest) returns (stream lightUsage) {} // Server streaming example
}
Basically, I am trying to create a Room message that I can use with multiple services, getting and setting fields across those services, but every time I try to base code off of this, I get undesired results, mostly because the values I want to change or compare are not working at all. Is the issue in the structure of my proto? Is it possible to use this Room message inside services other than lightandtempcontrol?
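For what it's worth, protobuf messages are declared at file scope rather than inside a service, so any service in the same .proto file (or in a file that imports it) should be able to reference Room in its request and response types. A minimal sketch of that, where occupancycontrol, RoomStatusRequest and RoomStatus are made-up names for illustration only:

// Room is visible to every service in the file; the nested enum is
// referenced as Room.Location. These message/service names are hypothetical.
message RoomStatusRequest {
    Room.Location location = 1;
}

message RoomStatus {
    Room room = 1;
}

service occupancycontrol {
    rpc GetRoom(RoomStatusRequest) returns (RoomStatus) {}
}

Keep in mind that a Room carried in a message is a copy of data, not a shared object: a field changed on the server is only visible to a client if the server sends the updated Room back in a response.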
I am trying to consume a maximum of 1000 messages from Kafka at a time (I am doing this because I need to batch insert into MSSQL). I was under the impression that Kafka keeps an internal queue that fetches messages from the brokers, and that when I use the consumer.Consume() method it just checks whether there are any messages in the internal queue and returns if it finds something; otherwise it blocks until the internal queue is updated or until the timeout.
I tried to use the solution suggested here: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1164#issuecomment-610308425
but when I specify TimeSpan.Zero (or any other timespan up to 1000 ms), the consumer never consumes any messages. If I remove the timeout it does consume messages, but then I am unable to exit the loop when there are no more messages left to read.
I also saw another question on Stack Overflow which suggested reading the offset of the last message sent to Kafka, then reading messages until that offset is reached and breaking out of the loop. But currently I only have one consumer and six partitions for the topic. I haven't tried it yet, but I think managing offsets for each of the partitions might make the code messy.
Can someone please tell me what to do?
static List<RealTime> getBatch()
{
    var config = new ConsumerConfig
    {
        BootstrapServers = ConfigurationManager.AppSettings["BootstrapServers"],
        GroupId = ConfigurationManager.AppSettings["ConsumerGroupID"],
        AutoOffsetReset = AutoOffsetReset.Earliest,
    };

    List<RealTime> results = new List<RealTime>();
    List<string> malformedJson = new List<string>();

    using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
    {
        consumer.Subscribe("RealTimeTopic");
        int count = 0;
        while (count < batchSize)
        {
            // Consume(1000) returns null if no message arrives within the timeout
            var consumerResult = consumer.Consume(1000);
            if (consumerResult?.Message is null)
            {
                break;
            }
            Console.WriteLine("read");
            try
            {
                RealTime item = JsonSerializer.Deserialize<RealTime>(consumerResult.Message.Value);
                results.Add(item);
                count += 1;
            }
            catch (Exception)
            {
                Console.WriteLine("malformed");
                malformedJson.Add(consumerResult.Message.Value);
            }
        }
        consumer.Close();
    }

    Console.WriteLine(malformedJson.Count);
    return results;
}
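For reference, the offset-based exit suggested in that other question would look roughly like this with confluent-kafka-dotnet (an untested sketch; note that Assignment is only populated once the first Consume call has triggered a rebalance):

// Sketch: stop consuming once the consumer's position has reached the
// high watermark on every assigned partition.
static bool ReachedEnd(IConsumer<Ignore, string> consumer)
{
    foreach (var tp in consumer.Assignment)
    {
        var watermarks = consumer.QueryWatermarkOffsets(tp, TimeSpan.FromSeconds(5));
        var position = consumer.Position(tp);
        if (position == Offset.Unset || position.Value < watermarks.High.Value)
            return false; // this partition still has unread messages
    }
    return true;
}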
I found a workaround.
For some reason the consumer first needs to be called without a timeout, meaning it will block until it gets at least one message. After that, calling Consume with a timeout of zero fetches the rest of the messages one by one from the internal queue. This seems to work out for the best.
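In code, the pattern looks roughly like this (a sketch of the idea above, reusing the consumer and batchSize from the question):

// Block until the first message arrives, then drain the consumer's internal
// queue with a zero timeout until it runs dry or the batch is full.
var first = consumer.Consume(); // no timeout: blocks until at least one message
var batch = new List<ConsumeResult<Ignore, string>> { first };
while (batch.Count < batchSize)
{
    var next = consumer.Consume(TimeSpan.Zero); // returns null once the queue is empty
    if (next?.Message is null)
        break;
    batch.Add(next);
}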
I had a similar problem; updating the Confluent.Kafka and librdkafka libraries from version 1.8.2 to 2.0.2 helped.
I'm working with multi-program UDP MPEG-2 TS streams that, unfortunately, dynamically re-map their elementary stream PIDs at random intervals. The stream is being demuxed using Microsoft's MPEG-2 demultiplexer filter.
I'm using the PSI-Parser filter (an example filter included in the DirectShow base classes) in order to react to the PAT/PMT changes.
The code is properly reacting to the change, yet I am experiencing some odd crashes (heap memory corruption) right after I remap the demuxer pins to their new PIDs. (The re-mapping is performed inside the thread that processes graph events, while the EC_PROGRAMCHANGED message is being handled.)
The crash could be due to faulty code on my part, yet I have not found any reference that tells me whether changing the pin PID mapping is safe while the graph is running.
Can anyone tell me whether this operation is safe and, if it is not, what I could do to minimize capture disruption?
I managed to find the source code for a Windows CE version of the demuxer filter. Inspecting it, it does indeed seem to be safe to remap a pin while the filter is running.
I also managed to find the source of my problems with the PSI-Parser filter.
When a new transport stream is detected, or the PAT version changes, the PAT is flushed: all programs are removed, and the table is re-parsed and repopulated.
There is a subtle bug within the CPATProcessor::flush() method.
//
// flush
//
// flush an array of struct: m_mpeg2_program[];
// and unmap all PMT_PIDs pids, except one: PAT
BOOL CPATProcessor::flush()
{
    BOOL bResult = TRUE;
    bResult = m_pPrograms->free_programs(); // CPrograms::free_programs() call
    if (bResult == FALSE)
        return bResult;
    bResult = UnmapPmtPid();
    return bResult;
} // flush
Here's the CPrograms::free_programs() implementation.
_inline BOOL free_programs()
{
    for (int i = 0; i < m_ProgramCount; i++) {
        if (!HeapFree(GetProcessHeap(), 0, (LPVOID) m_programs[i]))
            return FALSE;
    }
    return TRUE;
}
The problem here is that the m_ProgramCount member is never reset. So, apart from reporting the wrong number of programs in the table after a flush (since it is updated incrementally for each program found in the table), the next time the table is flushed it will try to release memory that was already released.
Here's my updated version that fixes the heap corruption errors:
_inline BOOL free_programs()
{
    for (int i = 0; i < m_ProgramCount; i++) {
        if (!HeapFree(GetProcessHeap(), 0, (LPVOID) m_programs[i]))
            return FALSE;
    }
    m_ProgramCount = 0; // This was missing; without it, the next call tries to free memory twice
    return TRUE;
}
I am writing a UWP application in which I use the BluetoothLEAdvertisementWatcher class to capture advertising from the BLE devices around me. This all works fine, and I can build a list of devices by capturing the BluetoothLEAdvertisementReceivedEventArgs, like below:
private async void LockerAdv_Received(BluetoothLEAdvertisementWatcher sender, BluetoothLEAdvertisementReceivedEventArgs args)
{
    await Dispatcher.RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, async () =>
    {
        ServiceUuidsFound += args.Advertisement.ServiceUuids.Count;
        Adverts.Add(args);
        // ... (rest of the handler omitted)
    });
}
However, I want to capture the ServiceData UUIDs carried in the advertising data (in our case 0x180f for the battery service data and 0xb991 for our own service data).
If I capture the advertising and examine Advertisement.ServiceUuids.Count as shown above, the count is always zero, even though I know there are two ServiceData UUIDs present, and apps like Nordic's nRF app find and display them.
Also, if I use the debugger to stop execution and examine Advertisement.ServiceUuids, the UUIDs appear not to have been captured and are certainly not accessible, as can be seen below:
Link to screenshot.
I have tried using
ScanningMode = BluetoothLEScanningMode.Active;
and
ScanningMode = BluetoothLEScanningMode.Passive;
and it makes no difference.
Ultimately, what I want is to be able to extract the ServiceData from the advertising data, as it contains useful information for our application, but if Windows won't even report the existence of the UUIDs then I am certain I can't get the data associated with them!
So, what I need to know is: am I doing something wrong? Is it a limitation of Windows 10 (I am using the very latest version)? Or is it perhaps an issue with the Dell OptiPlex I am using?
Any help would be gratefully received.
You are doing nothing wrong. The debugger watch window just doesn't let you dig any deeper, and it shows no native view.
Put the items you need into a list first; after that you can find out whether the lists contain the items, and even other collections with more items.
Below is an example that shows you how. I don't think it covers all items; that's up to you:
private async void OnAdvertisementReceivedAsync(BluetoothLEAdvertisementWatcher watcher,
    BluetoothLEAdvertisementReceivedEventArgs eventArgs)
{
    // we have to stop the watcher to get the data from one advertising device only.
    var device = await BluetoothLEDevice.FromBluetoothAddressAsync(eventArgs.BluetoothAddress);
    if (device != null)
    /* Check all advertisement items for null!
     * Not all of them are present!
     * Null check is not done in this example! */
    {
        var TimeStamp = eventArgs.Timestamp.DateTime;
        var LocalName = eventArgs.Advertisement.LocalName;
        var Name = device.Name;
        var BleAdress = eventArgs.BluetoothAddress; // ulong
        var Rssi = eventArgs.RawSignalStrengthInDBm.ToString();
        var ConnectionStatus = device.ConnectionStatus;
        var Access = device.DeviceAccessInformation.CurrentStatus;

        /* Shows advertising flags:
           LimitedDiscoverableMode = 1,
           GeneralDiscoverableMode = 2,
           ClassicNotSupported = 4,
           DualModeControllerCapable = 8,
           DualModeHostCapable = 16.
        */
        var flags = eventArgs.Advertisement.Flags.ToString();

        var AdvNumberOfDataSections = eventArgs.Advertisement.DataSections.Count;

        /* AdvDataSections contains the advertisement data */
        List<BluetoothLEAdvertisementDataSection> AdvDataSections = new List<BluetoothLEAdvertisementDataSection>();
        foreach (var item in eventArgs.Advertisement.DataSections)
        {
            AdvDataSections.Add(item);
        }

        List<BluetoothLEManufacturerData> AdvManufacturerData = new List<BluetoothLEManufacturerData>();
        foreach (var item in eventArgs.Advertisement.ManufacturerData)
        {
            AdvManufacturerData.Add(item);
        }

        List<GattDeviceService> ServicesList = new List<GattDeviceService>();
        var services = await device.GetGattServicesAsync(BluetoothCacheMode.Uncached);
        if (services != null)
        {
            foreach (var item in services.Services)
            {
                ServicesList.Add(item);
            }
        }
    }
    /* Start the watcher again to get other devices or missing services or data */
}
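Building on that: if what you ultimately need is the service data itself, you can filter the raw DataSections by AD type 0x16 (Service Data, 16-bit UUID) instead of going through ServiceUuids. A minimal sketch, assuming the same eventArgs as above; the helper name is made up, and DataReader comes from Windows.Storage.Streams:

// Pull 16-bit-UUID service data (AD type 0x16) out of the raw data sections.
// 0x180F (battery service) matches the UUID mentioned in the question.
private static void DumpServiceData(BluetoothLEAdvertisementReceivedEventArgs eventArgs)
{
    foreach (var section in eventArgs.Advertisement.DataSections)
    {
        if (section.DataType != 0x16) // 0x16 = "Service Data - 16-bit UUID"
            continue;
        var reader = DataReader.FromBuffer(section.Data);
        reader.ByteOrder = ByteOrder.LittleEndian;
        ushort uuid16 = reader.ReadUInt16(); // e.g. 0x180F or 0xB991
        byte[] payload = new byte[reader.UnconsumedBufferLength];
        reader.ReadBytes(payload);
        Debug.WriteLine($"Service data for 0x{uuid16:X4}: {BitConverter.ToString(payload)}");
    }
}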
What tools or techniques can I use to protect my ASP.NET web application from Denial of Service attacks?
A hardware solution is certainly the best option for preventing DoS attacks, but in a situation where you have no access to the hardware configuration or IIS settings, a developer must have something handy to block, or at least reduce, the effect of a DoS attack.
The core concept of the logic relies on a FIFO (first in, first out) collection such as Queue, but since that has some limitations, I decided to create my own collection.
Without going into more detail, this is the complete code I use:
public class AntiDosAttack
{
    static readonly List<IpObject> items = new List<IpObject>();
    static readonly object sync = new object(); // the list is shared across requests

    public static void Monitor(int Capacity, int Seconds2Keep, int AllowedCount)
    {
        string ip = HttpContext.Current.Request.UserHostAddress;
        if (ip == "")
            return;

        // This part excludes some useful requesters
        if (HttpContext.Current.Request.UserAgent != null && HttpContext.Current.Request.UserAgent == "Some good bots")
            return;

        int count;
        lock (sync) // Session_Start can run concurrently; guard the shared list
        {
            // Remove old requests from the collection; entries are newest-first,
            // so everything from the first expired entry onwards can go.
            int index = -1;
            for (int i = 0; i < items.Count; i++)
            {
                if ((DateTime.Now - items[i].Date).TotalSeconds > Seconds2Keep)
                {
                    index = i;
                    break;
                }
            }
            if (index > -1)
            {
                items.RemoveRange(index, items.Count - index);
            }

            // Add the new IP
            items.Insert(0, new IpObject(ip));

            // Trim the collection back to its original capacity; I could not find a better reliable way
            if (items.Count > Capacity)
            {
                items.RemoveAt(items.Count - 1);
            }

            // Count of the current IP in the collection
            count = items.Count(t => t.IP == ip);
        }

        // Decide whether to block or bypass
        if (count > AllowedCount)
        {
            // alert the webmaster by email (optional)
            ErrorReport.Report.ToWebmaster(new Exception("Blocked probable ongoing ddos attack"), "EvrinHost 24 / 7 Support - DDOS Block", "");
            // create a 429 response (or whatever is needed) and end the response
            HttpContext.Current.Response.StatusCode = 429;
            HttpContext.Current.Response.StatusDescription = "Too Many Requests, Slow down Cowboy!";
            HttpContext.Current.Response.Write("Too Many Requests");
            HttpContext.Current.Response.Flush(); // Sends all currently buffered output to the client.
            HttpContext.Current.Response.SuppressContent = true; // Gets or sets a value indicating whether to send HTTP content to the client.
            HttpContext.Current.ApplicationInstance.CompleteRequest(); // Causes ASP.NET to bypass all events and filtering in the HTTP pipeline chain of execution and directly execute the EndRequest event.
        }
    }

    internal class IpObject
    {
        public IpObject(string ip)
        {
            IP = ip;
            Date = DateTime.Now;
        }
        public string IP { get; set; }
        public DateTime Date { get; set; }
    }
}
The internal class is designed to keep the date of the request.
Naturally, DoS attack requests create a new session on each request, while a human visiting a website packs multiple requests into one session, so the method can be called in Session_Start.
Usage:

protected void Session_Start(object sender, EventArgs e)
{
    // numbers can be tuned for different purposes; this one is for a website with low traffic
    // this means: block a request if its IP exceeds 10 requests out of the 30 kept, within 2 seconds
    AntiDosAttack.Monitor(30, 2, 10);
}
For a website with heavy traffic you may want to change seconds to milliseconds, but consider the extra load caused by this code itself.
I am not aware of a better way to block intentional attacks on a website programmatically, so I would appreciate any comments and suggestions for improving the code. Until then, I consider this a good practice for preventing DoS attacks on ASP.NET websites programmatically.
Try the Dynamic IP Restrictions extension: http://www.iis.net/download/dynamiciprestrictions
Not a perfect solution, but it helps raise the bar =)
It's a broad area, so if you can be more specific about your application, or the level of threat you're trying to protect against, I'm sure more people can help you.
However, off the bat, you can go for a combination of a caching solution such as Squid (http://www.blyon.com/using-squid-proxy-to-fight-ddos/), Dynamic IP Restrictions (as explained by Jim) and, if you have the infrastructure, an active-passive failover setup, where your passive machine serves placeholder content that doesn't hit your database or any other machines. This is a last line of defence, to minimise the time a DDoS might bring your entire site offline.
I've implemented an HttpModule that intercepts the Response stream of every request and runs a half dozen to a dozen Regex.Replace() calls on each text/html response. I'm concerned about how much of a performance hit I'm incurring here. What's a good way to find out? I want to compare speed with and without this HttpModule running.
I've a few of these that hook into the Response.Filter stream pipeline to provide resource file integration, JS/CSS packing and rewriting of static files to absolute paths.
Test your regexes in RegexBuddy for speed over a few million iterations, ensure you use RegexOptions.Compiled, and remember that often the quickest and most efficient technique is to use a regex to broadly identify matches and then use C# to hone that down to exactly what you need.
Make sure you also cache any configuration that you rely upon.
We've had a lot of success with this.
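As a toy illustration of that broad-match-then-refine approach (the pattern and CDN host below are made up, not from the module in question): the regex only finds candidate tags, and a C# match evaluator decides whether to rewrite each one.

// Broad regex finds every <img> tag; the callback applies the precise check.
var broad = new Regex(@"<img\b[^>]*>", RegexOptions.Compiled);
string result = broad.Replace(html, m =>
    m.Value.Contains("src=\"/static/")
        ? m.Value.Replace("src=\"/static/", "src=\"https://cdn.example.com/static/")
        : m.Value);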
An HttpModule is just a common piece of code, so you can measure the execution time of this particular regex-replace logic on its own; that is enough. Take a set of typical response streams as input for your stress test and measure the replace using the Stopwatch class. Also consider the RegexOptions.Compiled switch.
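A minimal sketch of that kind of micro-benchmark (the patterns and the sample file name are placeholders, not the actual module's):

// Time the module's replacements over a captured response body.
using System;
using System.Diagnostics;
using System.IO;
using System.Text.RegularExpressions;

class RegexBench
{
    static void Main()
    {
        string html = File.ReadAllText("sample-response.html"); // a typical captured response
        var regexes = new[]
        {
            new Regex("href=\"/[^\"]*\"", RegexOptions.Compiled), // placeholder patterns
            new Regex(@"\s{2,}", RegexOptions.Compiled),
        };
        const int iterations = 10000;
        var watch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            string s = html;
            foreach (var re in regexes)
                s = re.Replace(s, string.Empty);
        }
        watch.Stop();
        Console.WriteLine("{0:F3} ms per pass", watch.Elapsed.TotalMilliseconds / iterations);
    }
}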
Here are a few ideas:
Add some Windows performance counters, and use them to measure and report average timing data. You might also increment a counter only when the time measurement exceeds a certain threshold.
Use tracing combined with Failed Request Tracing to collect and report timing data. You can also trigger FRT reports only if page execution time exceeds a threshold.
Write a unit test that uses the Windows OS clock to measure how long your code takes to execute.
Add a flag to your code that you can turn on or off with a test page to enable or disable your regex code, to allow easy A/B testing (see the sketch after this list).
Use a load test tool like WCAT to see how many page requests per second you can process with and without the code enabled.
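To illustrate the fourth idea, here's a rough sketch of such a flag; the appSettings key and the filter stream type are hypothetical stand-ins for your existing code:

// Hypothetical on/off switch: the module only installs its response filter
// when an appSettings flag is set, so you can A/B test with and without it.
using System.Configuration;
using System.Web;

public class RegexFilterModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        app.ReleaseRequestState += (sender, e) =>
        {
            bool enabled = ConfigurationManager.AppSettings["EnableRegexFilter"] == "true";
            if (enabled && app.Response.ContentType == "text/html")
                app.Response.Filter = new MyRegexStream(app.Response.Filter); // your existing filter stream
        };
    }

    public void Dispose() { }
}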
I recently had to do some perf tests on an HttpModule that I wrote, and decided to perform a couple of load tests to simulate web traffic and capture the performance times with and without the module configured. It was the only way I could figure to really know the effect of having the module installed.
I would usually do something with Apache Bench (see the following for how to install it: How to install apache bench on windows 7?), but I also had to use Windows authentication, and as ab only supports Basic authentication, it wasn't a fit for me. ab is slick and allows for different request scenarios, so that would be the first place to look. One other thought: you can get a lot of visibility by using Glimpse as well.
Since I couldn't use ab, I wrote something custom that allows for concurrent requests and can time different URLs.
Below is what I came up with to test the module. Hope it helps!
// https://www.nuget.org/packages/RestSharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Threading.Tasks;
using RestSharp;
using RestSharp.Authenticators;
string baseUrl = "http://localhost/";
void Main()
{
    for (var i = 0; i < 10; i++)
    {
        RunTests();
    }
}
private void RunTests()
{
    var sites = new string[] {
        "/resource/location",
    };
    RunFor(sites);
}

private void RunFor(string[] sites)
{
    RunTest(sites, 1);
    RunTest(sites, 5);
    RunTest(sites, 25);
    RunTest(sites, 50);
    RunTest(sites, 100);
    RunTest(sites, 500);
    RunTest(sites, 1000);
}
private void RunTest(string[] sites, int iterations, string description = "")
{
    var action = GetAction();
    var watch = new Stopwatch();
    // Collect every started task; a fixed-size array indexed by site would only
    // keep the last iteration's tasks, so WaitAll would not wait for all requests.
    var tasks = new List<Task<bool>>(sites.Count() * iterations);
    watch.Start();
    for (int j = 0; j < iterations; j++)
    {
        for (int i = 0; i < sites.Count(); i++)
        {
            tasks.Add(Task<bool>.Factory.StartNew(action, sites[i]));
        }
    }
    try
    {
        Task.WaitAll(tasks.ToArray());
    }
    catch (AggregateException e)
    {
        Console.WriteLine("\nThe following exceptions have been thrown by WaitAll()");
        for (int j = 0; j < e.InnerExceptions.Count; j++)
        {
            Console.WriteLine("\n-------------------------------------------------\n{0}", e.InnerExceptions[j].ToString());
        }
    }
    finally
    {
        watch.Stop();
        Console.WriteLine("\"{0}|{1}|{2}\", ", sites.Count(), iterations, watch.Elapsed.TotalSeconds);
    }
}
private Func<object, bool> GetAction()
{
    baseUrl = baseUrl.Trim('/');
    return (object obj) =>
    {
        var str = (string)obj;
        var client = new RestClient(baseUrl);
        client.Authenticator = new NtlmAuthenticator();
        var request = new RestRequest(str, Method.GET);
        request.AddHeader("Accept", "text/html");
        var response = client.Execute(request);
        return (response != null);
    };
}