I am trying to write a Flink program to process a Kinesis stream. The Kinesis stream comes from a DynamoDB stream and represents inserts made to a DynamoDB table.
Each record in the stream can contain multiple insert records; the number of insert records is variable (it can vary from 1 to 10).
I want to group all the insert records from the stream within an interval of 1 minute and sum the impression count (impressionCount) field. Sample records:
[
{
"country":"NL",
"userOS":"mac",
"createdOn":"2017-08-02 16:22:17.135600",
"trafficType":"D",
"affiliateId":"87",
"placement":"4",
"offerId":"999",
"advertiserId":"139",
"impressionCount":"1",
"uniqueOfferCount":"0"
},
{
"country":"NL",
"userOS":"mac",
"createdOn":"2017-08-02 16:22:17.135600",
"trafficType":"D",
"affiliateId":"85",
"placement":"4",
"offerId":"688",
"advertiserId":"139",
"impressionCount":"1",
"uniqueOfferCount":"0"
}
]
My code:
DataStream<List> kinesisStream = env.addSource(new FlinkKinesisConsumer<>(
"Impressions-Stream", new RawImpressionLogSchema(), consumerConfig));
/** CLASS: RawImpressionLogSchema **/
public class RawImpressionLogSchema implements DeserializationSchema<List> {
@Override
public List<RawImpressionLogRecord> deserialize(byte[] bytes) {
return RawImpressionLogRecord.parseImpressionLog(bytes);
}
@Override
public boolean isEndOfStream(List event) {
return false;
}
@Override
public TypeInformation<List> getProducedType() {
return TypeExtractor.getForClass(List.class);
}
}
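As a side note, not something the consumer strictly requires: the schema can also be parameterized with the concrete element type, which avoids the raw List and gives Flink precise type information. A sketch, assuming the same parseImpressionLog helper:
public class RawImpressionLogSchema
        implements DeserializationSchema<List<RawImpressionLogRecord>> {

    @Override
    public List<RawImpressionLogRecord> deserialize(byte[] bytes) {
        return RawImpressionLogRecord.parseImpressionLog(bytes);
    }

    @Override
    public boolean isEndOfStream(List<RawImpressionLogRecord> event) {
        return false;
    }

    @Override
    public TypeInformation<List<RawImpressionLogRecord>> getProducedType() {
        // TypeHint keeps the generic parameter that List.class alone would lose
        return TypeInformation.of(new TypeHint<List<RawImpressionLogRecord>>() {});
    }
}
With that, the source can be declared as DataStream<List<RawImpressionLogRecord>> instead of the raw DataStream<List>.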
/** parse Method **/
public static List<RawImpressionLogRecord> parseImpressionLog(
byte[] impressionLogBytes) {
JsonReader jsonReader = new JsonReader(new InputStreamReader(
new ByteArrayInputStream(impressionLogBytes)));
JsonElement jsonElement = Streams.parse(jsonReader);
if (jsonElement == null) {
throw new IllegalArgumentException(
"Event does not define a eventName field: "
+ new String(impressionLogBytes));
} else {
Type listType = new TypeToken<ArrayList<RawImpressionLogRecord>>(){}.getType();
return gson.fromJson(jsonElement, listType);
}
}
I was able to parse the input and create the kinesisStream. I wanted to know: is this the correct way, and how do I achieve the aggregation?
Also, once I have the DataStream, how can I apply map/filter/group-by functions on the List stream?
I am new to Flink and any help would be appreciated.
Update
I tried to come up with the code below to solve the above use case, but somehow the reduce function is not getting called. Any idea what is wrong with it?
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
DataStream<List<ImpressionLogRecord>> rawRecords = env.addSource(new ImpressionLogDataSourceFunction("C:\\LogFiles\\input.txt"));
DataStream<ImpressionLogRecord> impressionLogDataStream = rawRecords
.flatMap(new Splitter())
.assignTimestampsAndWatermarks(
new BoundedOutOfOrdernessTimestampExtractor<ImpressionLogRecord>(Time.seconds(5)) {
@Override
public long extractTimestamp(
ImpressionLogRecord element) {
return element.getCreatedOn().atZone(ZoneOffset.systemDefault()).toInstant().toEpochMilli();
}
}
);
//impressionLogDataStream.print();
KeyedStream<ImpressionLogRecord, String> keyedImpressionLogDataStream = impressionLogDataStream
.keyBy(impressionLogRecordForKey -> {
StringBuffer groupByKey = new StringBuffer();
groupByKey.append(impressionLogRecordForKey.getCreatedOn().toString().substring(0, 16));
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getOfferId());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getAdvertiserId());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getAffiliateId());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getCountry());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getPlacement());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getTrafficType());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getUserOS());
System.out.println("Call to Group By Function===================" + groupByKey);
return groupByKey.toString();
});
//keyedImpressionLogDataStream.print();
DataStream<ImpressionLogRecord> aggImpressionRecord = keyedImpressionLogDataStream
.timeWindow(Time.minutes(5))
.reduce((prevLogRecord, currentLogRecord) -> {
System.out.println("Calling Reduce Function-------------------------");
ImpressionLogRecord aggregatedImpressionLog = new ImpressionLogRecord();
aggregatedImpressionLog.setOfferId(prevLogRecord.getOfferId());
aggregatedImpressionLog.setCreatedOn(prevLogRecord.getCreatedOn().truncatedTo(ChronoUnit.MINUTES));
aggregatedImpressionLog.setAdvertiserId(prevLogRecord.getAdvertiserId());
aggregatedImpressionLog.setAffiliateId(prevLogRecord.getAffiliateId());
aggregatedImpressionLog.setCountry(prevLogRecord.getCountry());
aggregatedImpressionLog.setPlacement(prevLogRecord.getPlacement());
aggregatedImpressionLog.setTrafficType(prevLogRecord.getTrafficType());
aggregatedImpressionLog.setUserOS(prevLogRecord.getUserOS());
aggregatedImpressionLog.setImpressionCount(prevLogRecord.getImpressionCount() + currentLogRecord.getImpressionCount());
aggregatedImpressionLog.setUniqueOfferCount(prevLogRecord.getUniqueOfferCount() + currentLogRecord.getUniqueOfferCount());
return aggregatedImpressionLog;
});
aggImpressionRecord.print();
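A note on why the reduce output may never show up (this is general event-time behaviour, not a definitive diagnosis of the code above): an event-time window only fires once the watermark passes the end of the window, and with BoundedOutOfOrdernessTimestampExtractor the watermark trails the highest timestamp seen so far by 5 seconds. If the test input spans less than the 5-minute window, or the source stays open without producing newer timestamps, the window never closes and the reduce result is never emitted. A quick way to sanity-check the rest of the pipeline is to switch to processing time, where timeWindow() fires on the wall clock regardless of watermarks:
// Sanity check only; everything else in the pipeline stays the same.
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);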
Working Code
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
DataStream<List<ImpressionLogRecord>> rawRecords = env.addSource(new ImpressionLogDataSourceFunction("C:\\LogFiles\\input.txt"));
//This method converts the DataStream of List<ImpressionLogRecords> into a single stream of ImpressionLogRecords.
//Also assigns timestamp to each record in the stream
DataStream<ImpressionLogRecord> impressionLogDataStream = rawRecords
.flatMap(new RecordSplitter())
.assignTimestampsAndWatermarks(
new BoundedOutOfOrdernessTimestampExtractor<ImpressionLogRecord>(Time.seconds(5)) {
@Override
public long extractTimestamp(
ImpressionLogRecord element) {
return element.getCreatedOn().atZone(ZoneOffset.systemDefault()).toInstant().toEpochMilli();
}
}
);
//This method groups the records in the stream by a user defined key.
KeyedStream<ImpressionLogRecord, String> keyedImpressionLogDataStream = impressionLogDataStream
.keyBy(impressionLogRecordForKey -> {
StringBuffer groupByKey = new StringBuffer();
groupByKey.append(impressionLogRecordForKey.getCreatedOn().toString().substring(0, 16));
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getOfferId());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getAdvertiserId());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getAffiliateId());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getCountry());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getPlacement());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getTrafficType());
groupByKey.append("_");
groupByKey.append(impressionLogRecordForKey.getUserOS());
return groupByKey.toString();
});
//This method aggregates the grouped records every 1 min and calculates the sum of impression count and unique offer count.
DataStream<ImpressionLogRecord> aggImpressionRecord = keyedImpressionLogDataStream
.timeWindow(Time.minutes(1))
.reduce((prevLogRecord, currentLogRecord) -> {
ImpressionLogRecord aggregatedImpressionLog = new ImpressionLogRecord();
aggregatedImpressionLog.setOfferId(prevLogRecord.getOfferId());
aggregatedImpressionLog.setCreatedOn(prevLogRecord.getCreatedOn().truncatedTo(ChronoUnit.MINUTES));
aggregatedImpressionLog.setAdvertiserId(prevLogRecord.getAdvertiserId());
aggregatedImpressionLog.setAffiliateId(prevLogRecord.getAffiliateId());
aggregatedImpressionLog.setCountry(prevLogRecord.getCountry());
aggregatedImpressionLog.setPlacement(prevLogRecord.getPlacement());
aggregatedImpressionLog.setTrafficType(prevLogRecord.getTrafficType());
aggregatedImpressionLog.setUserOS(prevLogRecord.getUserOS());
aggregatedImpressionLog.setImpressionCount(prevLogRecord.getImpressionCount() + currentLogRecord.getImpressionCount());
aggregatedImpressionLog.setUniqueOfferCount(prevLogRecord.getUniqueOfferCount() + currentLogRecord.getUniqueOfferCount());
return aggregatedImpressionLog;
});
aggImpressionRecord.print();
aggImpressionRecord.addSink(new ImpressionLogDataSink());
env.execute();
}
public static class RecordSplitter
implements
FlatMapFunction<List<ImpressionLogRecord>, ImpressionLogRecord> {
@Override
public void flatMap(List<ImpressionLogRecord> rawImpressionRecords,
Collector<ImpressionLogRecord> impressionLogRecordCollector)
throws Exception {
for (int i = 0; i < rawImpressionRecords.size(); i++) {
impressionLogRecordCollector.collect(rawImpressionRecords.get(i));
}
}
}
Related question: Custom dimension
public class GetUsersData {
private static final String APPLICATION_NAME = "Wtr-web";
private static final JsonFactory JSON_FACTORY = JacksonFactory.getDefaultInstance();
private static final String KEY_FILE_LOCATION = "D:\\ga3\\credentials.json";
// private static final String USER_VIEW_ID = "281667139"; // userIdView
private static final String CLIENT_VIEW_ID = "281540591"; // all page
// private static Map<String, String> convertedUserMap = new LinkedHashMap<>();
public UsersData getUsersData() throws Exception {
// Create the DateRange object.
DateRange dateRange = new DateRange();
dateRange.setStartDate("30DaysAgo");
dateRange.setEndDate("today");
return getStatistics(initializeAnalyticsReporting(), dateRange, CLIENT_VIEW_ID);
}
private AnalyticsReporting initializeAnalyticsReporting() throws GeneralSecurityException, Exception {
HttpTransport httpTransport = GoogleNetHttpTransport.newTrustedTransport();
GoogleCredential credential = GoogleCredential.fromStream(new FileInputStream(KEY_FILE_LOCATION))
.createScoped(Collections.singletonList(AnalyticsReportingScopes.ANALYTICS_READONLY));
// Construct the Analytics Reporting service object.
return new AnalyticsReporting.Builder(httpTransport, JSON_FACTORY, credential)
.setApplicationName(APPLICATION_NAME).build();
}
private UsersData getStatistics(AnalyticsReporting service, DateRange dateRange, String userViewId) {
// Create the Metrics object.
Metric visits = new Metric().setExpression("ga:visits");
Dimension UserId = new Dimension().setName("ga:UserID");
// Create the ReportRequest object.
ReportRequest request = new ReportRequest().setViewId(userViewId).setDateRanges(Arrays.asList(dateRange))
.setMetrics(Arrays.asList(visits))
.setDimensions(Arrays.asList(UserId));
// Create the GetReportsRequest object.
GetReportsRequest getReport = new GetReportsRequest().setReportRequests(Arrays.asList(request));
GetReportsResponse response = null;
try {
response = service.reports().batchGet(getReport).execute();
} catch (Exception e) {
e.printStackTrace();
System.out.println(" data not found");
}
if (response != null) {
for (Report report : response.getReports()) {
ColumnHeader columnHeader = report.getColumnHeader();
ClassInfo classInfo = report.getClassInfo();
List<ReportRow> rows = report.getData().getRows();
if (rows == null) {
System.out.println("No data found for the request.");
continue;
}
for (ReportRow row : rows) {
List<String> dimensions = row.getDimensions();
List<DateRangeValues> metrics = row.getMetrics();
}
}
}
return new UsersData();
}
}
com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
POST https://analyticsreporting.googleapis.com/v4/reports:batchGet
{
"code" : 400,
"errors" : [ {
"domain" : "global",
"message" : "Unknown dimension(s): ga:UserID\nFor details see https://developers.google.com/analytics/devguides/reporting/core/dimsmets.",
"reason" : "badRequest"
} ],
"message" : "Unknown dimension(s): ga:UserID\nFor details see https://developers.google.com/analytics/devguides/reporting/core/dimsmets.",
"status" : "INVALID_ARGUMENT"
}
UserID is my custom dimension. How can I resolve this issue?
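For reference, the Reporting API addresses custom dimensions by index, as ga:dimensionN, rather than by their display name, which is why "ga:UserID" is rejected as unknown. A sketch of the adjusted request; the index 1 is only an assumption, the real index is listed under Admin > Custom Definitions > Custom Dimensions:
// "ga:dimension1" is an assumption; substitute the index of the UserID custom dimension.
Dimension userId = new Dimension().setName("ga:dimension1");
ReportRequest request = new ReportRequest()
        .setViewId(userViewId)
        .setDateRanges(Arrays.asList(dateRange))
        .setMetrics(Arrays.asList(visits))      // same metric as above
        .setDimensions(Arrays.asList(userId));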
Folks,
I am trying to move data to S3 from Salesforce using an Apex class. I have been told by the data manager to send the data in zip/gzip format to the S3 bucket to save on storage costs.
I have simply tried request.setCompressed(true), as I've read it compresses the body before sending it to the endpoint. Code below:
HttpRequest request = new HttpRequest();
request.setEndpoint('callout:'+DATA_NAMED_CRED+'/'+URL+'/'+generateUniqueTimeStampforSuffix());
request.setMethod('PUT');
request.setBody(JSON.serialize(data));
request.setCompressed(true);
request.setHeader('Content-Type','application/json');
But no matter what I always receive this:
<Error><Code>XAmzContentSHA256Mismatch</Code><Message>The provided 'x-amz-content-sha256' header does not match what was computed.</Message><ClientComputedContentSHA256>fd31b2b9115ef77e8076b896cb336d21d8f66947210ffcc9c4d1971b2be3bbbc</ClientComputedContentSHA256><S3ComputedContentSHA256>1e7f2115e60132afed9e61132aa41c3224c6e305ad9f820e6893364d7257ab8d</S3ComputedContentSHA256>
I have tried multiple headers too, like setting the content type to gzip/zip, etc.
Any pointers in the right direction would be appreciated.
I had a good amount of headaches attempting to do a similar thing. I feel your pain.
The following code has worked for us using lambda functions; you can try modifying it and see what happens.
public class AwsApiGateway {
// Things we need to know about the service. Set these values in init()
String host, payloadSha256;
String resource;
String service = 'execute-api';
String region;
public Url endpoint;
String accessKey;
String stage;
string secretKey;
HttpMethod method = HttpMethod.XGET;
// Remember to set "payload" here if you need to specify a body
// payload = Blob.valueOf('some-text-i-want-to-send');
// This method helps prevent leaking secret key,
// as it is never serialized
// Url endpoint;
// HttpMethod method;
Blob payload;
// Not used externally, so we hide these values
Blob signingKey;
DateTime requestTime;
Map<String, String> queryParams = new map<string,string>(), headerParams = new map<string,string>();
void init(){
if (payload == null) payload = Blob.valueOf('');
requestTime = DateTime.now();
createSigningKey(secretKey);
}
public AwsApiGateway(String resource){
this.stage = AWS_LAMBDA_STAGE;
this.resource = '/' + stage + '/' + resource;
this.region = AWS_REGION;
this.endpoint = new Url(AWS_ENDPOINT);
this.accessKey = AWS_ACCESS_KEY;
this.secretKey = AWS_SECRET_KEY;
}
// Make sure we can't misspell methods
public enum HttpMethod { XGET, XPUT, XHEAD, XOPTIONS, XDELETE, XPOST }
public void setMethod (HttpMethod method){
this.method = method;
}
public void setPayload (string payload){
this.payload = Blob.valueOf(payload);
}
// Add a header
public void setHeader(String key, String value) {
headerParams.put(key.toLowerCase(), value);
}
// Add a query param
public void setQueryParam(String key, String value) {
queryParams.put(key.toLowerCase(), uriEncode(value));
}
// Create a canonical query string (used during signing)
String createCanonicalQueryString() {
String[] results = new String[0], keys = new List<String>(queryParams.keySet());
keys.sort();
for(String key: keys) {
results.add(key+'='+queryParams.get(key));
}
return String.join(results, '&');
}
// Create the canonical headers (used for signing)
String createCanonicalHeaders(String[] keys) {
keys.addAll(headerParams.keySet());
keys.sort();
String[] results = new String[0];
for(String key: keys) {
results.add(key+':'+headerParams.get(key));
}
return String.join(results, '\n')+'\n';
}
// Create the entire canonical request
String createCanonicalRequest(String[] headerKeys) {
return String.join(
new String[] {
method.name().removeStart('X'), // METHOD
new Url(endPoint, resource).getPath(), // RESOURCE
createCanonicalQueryString(), // CANONICAL QUERY STRING
createCanonicalHeaders(headerKeys), // CANONICAL HEADERS
String.join(headerKeys, ';'), // SIGNED HEADERS
payloadSha256 // SHA256 PAYLOAD
},
'\n'
);
}
// We have to replace ~ and " " correctly, or we'll break AWS on those two characters
string uriEncode(String value) {
return value==null? null: EncodingUtil.urlEncode(value, 'utf-8').replaceAll('%7E','~').replaceAll('\\+','%20');
}
// Create the entire string to sign
String createStringToSign(String[] signedHeaders) {
String result = createCanonicalRequest(signedHeaders);
return String.join(
new String[] {
'AWS4-HMAC-SHA256',
headerParams.get('date'),
String.join(new String[] { requestTime.formatGMT('yyyyMMdd'), region, service, 'aws4_request' },'/'),
EncodingUtil.convertToHex(Crypto.generateDigest('sha256', Blob.valueof(result)))
},
'\n'
);
}
// Create our signing key
void createSigningKey(String secretKey) {
signingKey = Crypto.generateMac('hmacSHA256', Blob.valueOf('aws4_request'),
Crypto.generateMac('hmacSHA256', Blob.valueOf(service),
Crypto.generateMac('hmacSHA256', Blob.valueOf(region),
Crypto.generateMac('hmacSHA256', Blob.valueOf(requestTime.formatGMT('yyyyMMdd')), Blob.valueOf('AWS4'+secretKey))
)
)
);
}
// Create all of the bits and pieces using all utility functions above
public HttpRequest createRequest() {
init();
payloadSha256 = EncodingUtil.convertToHex(Crypto.generateDigest('sha-256', payload));
setHeader('date', requestTime.formatGMT('yyyyMMdd\'T\'HHmmss\'Z\''));
if(host == null) {
host = endpoint.getHost();
}
setHeader('host', host);
HttpRequest request = new HttpRequest();
request.setMethod(method.name().removeStart('X'));
if(payload.size() > 0) {
setHeader('Content-Length', String.valueOf(payload.size()));
request.setBodyAsBlob(payload);
}
String finalEndpoint = new Url(endpoint, resource).toExternalForm(),
queryString = createCanonicalQueryString();
if(queryString != '') {
finalEndpoint += '?'+queryString;
}
request.setEndpoint(finalEndpoint);
for(String key: headerParams.keySet()) {
request.setHeader(key, headerParams.get(key));
}
String[] headerKeys = new String[0];
String stringToSign = createStringToSign(headerKeys);
request.setHeader(
'Authorization',
String.format(
'AWS4-HMAC-SHA256 Credential={0}, SignedHeaders={1},Signature={2}',
new String[] {
String.join(new String[] { accessKey, requestTime.formatGMT('yyyyMMdd'), region, service, 'aws4_request' },'/'),
String.join(headerKeys,';'), EncodingUtil.convertToHex(Crypto.generateMac('hmacSHA256', Blob.valueOf(stringToSign), signingKey))}
));
system.debug(json.serializePretty(request.getEndpoint()));
return request;
}
// Actually perform the request, and throw exception if response code is not valid
public HttpResponse sendRequest(Set<Integer> validCodes) {
HttpResponse response = new Http().send(createRequest());
if(!validCodes.contains(response.getStatusCode())) {
system.debug(json.deserializeUntyped(response.getBody()));
}
return response;
}
// Same as above, but assume that only 200 is valid
// This method exists because most of the time, 200 is what we expect
public HttpResponse sendRequest() {
return sendRequest(new Set<Integer> { 200 });
}
// TEST METHODS
public static string getEndpoint(string attribute){
AwsApiGateway api = new AwsApiGateway(attribute);
return api.createRequest().getEndpoint();
}
public static string getEndpoint(string attribute, map<string, string> params){
AwsApiGateway api = new AwsApiGateway(attribute);
for (string key: params.keySet()){
api.setQueryParam(key, params.get(key));
}
return api.createRequest().getEndpoint();
}
public class EndpointConfig {
string resource;
string attribute;
list<object> items;
map<string,string> params;
public EndpointConfig(string resource, string attribute, list<object> items){
this.items = items;
this.resource = resource;
this.attribute = attribute;
}
public EndpointConfig setQueryParams(map<string,string> parameters){
params = parameters;
return this;
}
public string endpoint(){
if (params == null){
return getEndpoint(resource);
} else return getEndpoint(resource + '/' + attribute, params);
}
public SingleRequestMock mockResponse(){
return new SingleRequestMock(200, 'OK', json.serialize(items), null);
}
}
}
In the book "Introduction to Vala", Dr. Michael Lauer mentions that the libsoup async API is broken. I'm struggling to write a simple example using session.queue_message that queries radio stations using the service from radio-browser. Here is my code. I would appreciate any help from experienced programmers like "Al Thomas". Thank you.
public class Station : Object {
// A globally unique identifier for the change of the station information
public string changeuuid { get; set; default = ""; }
// A globally unique identifier for the station
public string stationuuid { get; set; default = ""; }
// The name of the station
public string name { get; set; default = ""; }
// The stream URL provided by the user
public string url { get; set; default = ""; }
// and so on ... many properties
public string to_string () {
var builder = new StringBuilder ();
builder.append_printf ("\nchangeuuid = %s\n", changeuuid);
builder.append_printf ("stationuuid = %s\n", stationuuid);
builder.append_printf ("name = %s\n", name);
builder.append_printf ("url = %s\n", url);
return (owned) builder.str;
}
}
public class RadioBrowser : Object {
private static Soup.Session session;
// private static MainLoop main_loop;
public const string API_URL = "https://de1.api.radio-browser.info/json/stations";
public const string USER_AGENT = "github.aeldemery.radiolibrary";
public RadioBrowser (string user_agent = USER_AGENT, uint timeout = 50)
requires (timeout > 0)
{
Intl.setlocale ();
session = new Soup.Session ();
session.timeout = timeout;
session.user_agent = user_agent;
session.use_thread_context = true;
// main_loop = new MainLoop ();
}
private void check_response_status (Soup.Message msg) {
if (msg.status_code != 200) {
var str = "Error: Status message error %s.".printf (msg.reason_phrase);
error (str);
}
}
public Gee.ArrayList<Station> listStations () {
var stations = new Gee.ArrayList<Station> ();
var data_list = Datalist<string> ();
data_list.set_data ("limit", "100");
var parser = new Json.Parser ();
parser.array_element.connect ((pars, array, index) => {
var station = Json.gobject_deserialize (typeof (Station), array.get_element (index)) as Station;
assert_nonnull (station);
stations.add (station);
});
var msg = Soup.Form.request_new_from_datalist (
"POST",
API_URL,
data_list
);
// send_message works but not queue_message
// session.send_message (msg);
session.queue_message (msg, (sess, mess) => {
check_response_status (msg);
try {
parser.load_from_data ((string) msg.response_body.flatten ().data);
} catch (Error e) {
error ("Failed to parse data, error:" + e.message);
}
});
return stations;
}
}
int main (string[] args) {
var radio_browser = new RadioBrowser ();
var stations = radio_browser.listStations ();
assert_nonnull (stations);
foreach (var station in stations) {
print (station.to_string ());
}
return 0;
}
While I'm not Al Thomas, I still might be able to help. ;)
For async calls to work, there needs to be a main loop running, typically on the main program thread. Thus you want to create and run the main loop from your main() function, rather than in your application code:
int main (string[] args) {
var loop = new GLib.MainLoop ();
var radio_browser = new RadioBrowser ();
// set up async calls here
// then set the main loop running as the last thing
loop.run();
return 0;
}
Also, if you want to wait for an async call to complete, you typically need to make the call using the yield keyword from another async function. For example:
public async Gee.ArrayList<Station> listStations () {
…
// When this call is made, execution of listStations() will be
// suspended until the soup response is received
yield session.send_async(msg);
// Execution then resumes normally
check_response_status (msg);
parser.load_from_data ((string) msg.response_body.flatten ().data);
…
return stations;
}
You can then call this from the (non-async) main function using the listStations.begin(…) notation:
int main (string[] args) {
var loop = new GLib.MainLoop ();
var radio_browser = new RadioBrowser ();
radio_browser.listStations.begin((obj, res) => {
var stations = radio_browser.listStations.end(res);
…
loop.quit();
});
loop.run();
}
As further reading, I would recommend the async section of the Vala Tutorial, and the async examples on the wiki as well.
I have the following error in this code: Cannot infer type arguments for ReadOnlyListWrapper<>
What should my return type look like? I need to store an ArrayList for each node in all columns, but I cannot return it.
for (Entry<String, String> ent : dc.getSortedOrgAll().entrySet()) {
TreeTableColumn<String, ArrayList<String>> col = new TreeTableColumn<>(
ent.getValue());
col.setCellValueFactory(new Callback<TreeTableColumn.CellDataFeatures<String, ArrayList<String>>, ObservableValue<ArrayList<String>>>() {
@Override
public ObservableValue<ArrayList<String>> call(
CellDataFeatures<String, ArrayList<String>> param) {
TreeMap<String, List<String>> temp = (TreeMap<String, List<String>>) dc
.getFuncTypeOrg().clone();
ArrayList<String> result = new ArrayList<>();
for (int i = 0; i < temp.size(); i++) {
List<String> list = temp.firstEntry().getValue();
String key = temp.firstEntry().getKey();
// root.getChildren();
if (list.get(1).equals("Papier")) {
System.out.println(list.get(1));
}
if (list.get(1).equals(param.getValue().getValue())
&& list.get(5).equals(col.getText())) {
result.add(list.get(2));
if(list.size()==9)
result.add(list.get(list.size()-1));
else result.add("White");
} else {
temp.remove(key);
// result = null;
}
}
return new ReadOnlyListWrapper<>(result);
}
});
ReadOnlyListWrapper<T> implements ObservableValue<ObservableList<T>>, which isn't what you need, as you declared the callback to return an ObservableValue<ArrayList<T>> (T is just String here).
So I think you just need
return new ReadOnlyObjectWrapper<ArrayList<String>>(result);
and you can probably omit the generic type:
return new ReadOnlyObjectWrapper<>(result);
Just a comment: I think you could make your life much easier by defining some actual data model classes, instead of trying to force your data into various collections implementations.
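For illustration only, with invented class and field names, such a model class could look like this:
import javafx.beans.property.SimpleStringProperty;
import javafx.beans.property.StringProperty;

// Purely illustrative sketch; adapt the fields to whatever the rows actually hold.
public class OrgRow {
    private final StringProperty name = new SimpleStringProperty();
    private final StringProperty color = new SimpleStringProperty();

    public OrgRow(String name, String color) {
        this.name.set(name);
        this.color.set(color);
    }

    public StringProperty nameProperty() { return name; }
    public StringProperty colorProperty() { return color; }
}
The tree table would then be typed as TreeTableView<OrgRow>, and a column's cell value factory reduces to cellData -> cellData.getValue().getValue().nameProperty().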
I am using Netty 3.5.11 with JDK 1.7 on Ubuntu to receive a large number of stock-rate updates at a very high frequency. The message format is JSON. The data is subscribed from a topic on a Redis server, and there is a Subscriber for each symbol. The channel object is passed to multiple Subscribers, and on receiving data it is written to the client.
The amount of data received is around 25,000 records in 2 minutes, with each record around 500 bytes on average.
During test runs, around 7,500-8,000 records were dropped because the channel was not writable.
How do I avoid this?
I also noticed that latency increases steadily, leading to updates being received after a long delay. This happened when I used BufferedWriteHandler to avoid the drops.
Here are the options that I set on the bootstrap:
executionHandler = new ExecutionHandler(
new OrderedMemoryAwareThreadPoolExecutor(Runtime.getRuntime().availableProcessors() * 2, 1000000, 10000000, 100,
TimeUnit.MILLISECONDS));
serverBootStrap.setPipelineFactory(new ChannelPipelineFactory()
{
@Override
public ChannelPipeline getPipeline() throws Exception
{
return Channels.pipeline(new PortUnificationServerHandler(getConfiguration(), executionHandler));
}
});
serverBootStrap.setOption("child.tcpNoDelay", true);
serverBootStrap.setOption("tcpNoDelay", true);
serverBootStrap.setOption("child.keepAlive", true);
serverBootStrap.setOption("child.reuseAddress", true);
//setting buffer size can improve I/O
serverBootStrap.setOption("child.sendBufferSize", 16777216);
serverBootStrap.setOption("receiveBufferSize", 16777216);//1048576);
// better to have an receive buffer predictor
serverBootStrap.setOption("receiveBufferSizePredictorFactory", new AdaptiveReceiveBufferSizePredictorFactory(1024, 1024 * 16, 16777216));//1048576));
//if the server is sending 1000 messages per sec, optimum write buffer water marks will
//prevent unnecessary throttling, Check NioSocketChannelConfig doc
serverBootStrap.setOption("backlog", 1000);
serverBootStrap.setOption("sendBufferSize", 16777216);//1048576);
serverBootStrap.setOption("writeBufferLowWaterMark", 1024 * 1024 * 25);
serverBootStrap.setOption("writeBufferHighWaterMark", 1024 * 1024 * 50);
The pipeline and handlers class
public class PortUnificationServerHandler extends FrameDecoder
{
private AppConfiguration appConfiguration;
private final ExecutionHandler executionHandler;
public PortUnificationServerHandler(AppConfiguration pAppConfiguration, ExecutionHandler pExecutionHandler)
{
appConfiguration = pAppConfiguration;
this.executionHandler = pExecutionHandler;
}
@Override
protected Object decode(ChannelHandlerContext ctx, Channel channel, ChannelBuffer buffer) throws Exception
{
String lRequest = buffer.toString(CharsetUtil.UTF_8);
if (ConnectionServiceHelper.isValidJSON(lRequest))
{
ObjectMapper lObjectMapper = new ObjectMapper();
StringReader lStringReader = new StringReader(lRequest);
JsonNode lNode = lObjectMapper.readTree(lStringReader);
if (lNode.get(Constants.REQUEST_TYPE).asText().trim().equalsIgnoreCase(Constants.LOGIN_REQUEST))
{
JsonNode lDataNode1 = lNode.get(Constants.REQUEST_DATA);
LoginRequest lLogin = lObjectMapper.treeToValue(lDataNode1, LoginRequest.class);
if (lLogin.getCompress() != null)
{
if (lLogin.getCompress().trim().equalsIgnoreCase(Constants.COMPRESS_FLAG_TRUE))
{
enableJSON(ctx);
enableGzip(ctx);
ctx.getPipeline().remove(this);
}
else
{
enableJSON(ctx);
ctx.getPipeline().remove(this);
}
}
else
{
enableJSON(ctx);
ctx.getPipeline().remove(this);
}
}
}
// Forward the current read buffer as is to the new handlers.
return buffer.readBytes(buffer.readableBytes());
}
private void enableJSON(ChannelHandlerContext ctx)
{
ChannelPipeline pipeline = ctx.getPipeline();
boolean lHandlerExists = pipeline.getContext("bufferedwriter") != null;
if (!lHandlerExists)
{
pipeline.addFirst("bufferedwriter", new MyBufferedWriteHandler()); // 80960
}
lHandlerExists = pipeline.getContext("framer") != null;
if (!lHandlerExists)
{
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(65535,
new ChannelBuffer[]
{
ChannelBuffers.wrappedBuffer(
new byte[]
{
'\n'
})
}));
}
lHandlerExists = pipeline.getContext("decoder") != null;
if (!lHandlerExists)
{
pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8));
}
lHandlerExists = pipeline.getContext("encoder") != null;
if (!lHandlerExists)
{
pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8));
}
lHandlerExists = pipeline.getContext("executor") != null;
if (!lHandlerExists)
{
pipeline.addLast("executor", executionHandler);
}
lHandlerExists = pipeline.getContext("handler") != null;
if (!lHandlerExists)
{
pipeline.addLast("handler", new ConnectionServiceUpStreamHandler(appConfiguration));
}
lHandlerExists = pipeline.getContext("unite") != null;
if (!lHandlerExists)
{
pipeline.addLast("unite", new PortUnificationServerHandler(appConfiguration, executionHandler));
}
}
private void enableGzip(ChannelHandlerContext ctx)
{
ChannelPipeline pipeline = ctx.getPipeline();
//pipeline.remove("decoder");
//pipeline.addLast("decoder", new MyStringDecoder(CharsetUtil.UTF_8, true));
//pipeline.addLast("compress", new CompressionHandler(80, "gzipdeflater"));
boolean lHandlerExists = pipeline.getContext("encoder") != null;
if (lHandlerExists)
{
pipeline.remove("encoder");
}
lHandlerExists = pipeline.getContext("gzipdeflater") != null;
if (!lHandlerExists)
{
pipeline.addBefore("executor", "gzipdeflater", new ZlibEncoder(ZlibWrapper.GZIP));
}
lHandlerExists = pipeline.getContext("lengthprepender") != null;
if (!lHandlerExists)
{
pipeline.addAfter("gzipdeflater", "lengthprepender", new LengthFieldPrepender(4));
}
}
}
The BufferedWriteHandler
public class MyBufferedWriteHandler extends BufferedWriteHandler
{
private final AtomicLong bufferSize = new AtomicLong();
final Logger logger = LoggerFactory.getLogger(getClass());
public MyBufferedWriteHandler() {
// Enable consolidation by default.
super(true);
}
@Override
public void writeRequested(ChannelHandlerContext ctx, MessageEvent e) throws Exception
{
ChannelBuffer data = (ChannelBuffer) e.getMessage();
if (e.getChannel().isWritable())
{
long newBufferSize = bufferSize.get();
// Flush anything that was buffered while the channel was unwritable.
if (newBufferSize > 0)
{
flush();
bufferSize.set(0);
}
ctx.sendDownstream(e);
}
else
{
logger.warn( "Buffering data for : " + e.getChannel().getRemoteAddress() );
super.writeRequested(ctx, e);
bufferSize.addAndGet(data.readableBytes());
}
}
@Override
public void channelInterestChanged(ChannelHandlerContext ctx, ChannelStateEvent e) throws Exception
{
if (e.getChannel().isWritable())
{
flush();
}
}
}
The function used in the Subscriber class to write data
public void writeToClient(Channel pClientChannel, String pMessage) throws IOException
{
String lMessage = pMessage;
if (pClientChannel.isWritable())
{
lMessage += Constants.RESPONSE_DELIMITER;
pClientChannel.write(lMessage);
}
else
{
logger.warn(DroppedCounter++ + " dropped : " + pMessage);
}
}
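For comparison, a hedged sketch rather than a drop-in fix, and it assumes MyBufferedWriteHandler stays in the pipeline: instead of discarding the message when the channel is momentarily unwritable, the write can always be handed to the pipeline so the buffering handler queues it, and failures can be observed on the returned ChannelFuture:
// Sketch only: always hand the message to the pipeline and let the buffered
// write handler queue it while the channel is unwritable.
public void writeToClient(Channel pClientChannel, String pMessage)
{
    ChannelFuture lFuture = pClientChannel.write(pMessage + Constants.RESPONSE_DELIMITER);
    lFuture.addListener(new ChannelFutureListener()
    {
        @Override
        public void operationComplete(ChannelFuture future)
        {
            if (!future.isSuccess())
            {
                logger.warn("Write failed: " + future.getCause());
            }
        }
    });
}
The trade-off is memory: the buffered handler's queue is not bounded, so drops are exchanged for heap usage until the channel becomes writable again.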
I have implemented some of the suggestions that I read on Stack Overflow and other sites, but I have not been successful in resolving this issue.
Kindly suggest or advise as to what I am missing.
Thanks!