Set Flash in Symfony 2.1

I have been adapting our code in preparation for moving to the new Symfony 2.1 codebase.
In 2.0.* we could set flash messages by simply calling the session service in our controller using the following:
$this->get('session')->setFlash('type', 'message');
I have trawled through the new documentation; I was just wondering if there is a clean way, similar to the above, rather than having to go through the FlashBagInterface directly?

Try:
$this->get('session')->getFlashBag()->set('type', 'message');

Also, you might want to try the add() method instead, which won't obliterate other flash messages:
$this->get('session')->getFlashBag()->add('type', 'message');
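In a controller action this typically looks like the following (the controller class, action, and route name here are only illustrative):

use Symfony\Bundle\FrameworkBundle\Controller\Controller;

class TaskController extends Controller
{
    public function updateAction()
    {
        // ... perform the update ...

        // add() queues a flash under the 'notice' type without overwriting existing ones
        $this->get('session')->getFlashBag()->add('notice', 'Your changes were saved!');

        // The flash stays in the session until it is read, typically in the
        // template rendered after this redirect
        return $this->redirect($this->generateUrl('task_show'));
    }
}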

FYI:
The Symfony HttpFoundation component has a very powerful and flexible session subsystem which is designed to provide session management through a simple object-oriented interface using a variety of session storage drivers.
FlashBagInterface has a simple API:
set(): Sets an attribute by key;
get(): Gets an attribute by key;
all(): Gets all attributes as an array of key => value;
has(): Returns true if the attribute exists;
replace(): Sets multiple attributes at once: takes a keyed array and sets each key => value pair;
remove(): Deletes an attribute by key;
clear(): Clears all attributes.
Source: Symfony2: Session Management

Related

How to reuse KafkaListenerContainerFactory with a custom message converter for batch listeners and non-batch/record listeners?

The spring-kafka documentation mentions:
Starting with version 2.8, you can override the factory's batchListener property using the batch property on the @KafkaListener annotation. This, together with the changes to Container Error Handlers, allows the same factory to be used for both record and batch listeners.
I want to use it like this, i.e. reuse the same factory for record and batch listeners. The factory is provided by an internal library that is used by multiple services.
However, I also need to define a custom MessageConverter.
But I found out that for batch listeners I need to wrap my message converter in a BatchMessagingMessageConverter, otherwise the message converter will not be used correctly and the wrong type will be supplied to my batch listener.
So: Is there a simple way to reuse KafkaListenerContainerFactory with a custom messageConverter for batch listeners and non-batch/record listeners?
My current workaround looks like this, but I do not like it as it depends on how spring-kafka internally sets up its configuration, so it might break in future updates:
factory.setContainerCustomizer(container -> {
    var messageListener = container.getContainerProperties().getMessageListener();
    if (messageListener instanceof FilteringBatchMessageListenerAdapter) {
        var messageListenerDelegate =
                ((FilteringBatchMessageListenerAdapter<?, ?>) messageListener).getDelegate();
        if (messageListenerDelegate instanceof BatchMessagingMessageListenerAdapter) {
            ((BatchMessagingMessageListenerAdapter<?, ?>) messageListenerDelegate).setBatchMessageConverter(
                    new BatchMessagingMessageConverter(messageConverter));
        }
    }
});
Another option is to create a separate factory for batch listeners. With this, I am afraid that someone might use @KafkaListener(batch = "true") without supplying the correct library, and it would only partly work.
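For reference, a rough sketch of that separate, batch-only factory (bean and type names are illustrative, and it assumes the record-level converter is available as a RecordMessageConverter bean; whether the wrapped converter is picked up exactly as intended should be verified against your spring-kafka version):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.support.converter.BatchMessagingMessageConverter;
import org.springframework.kafka.support.converter.RecordMessageConverter;

@Configuration
public class BatchListenerConfig {

    // Batch-only factory; listeners opt in explicitly via
    // @KafkaListener(containerFactory = "batchKafkaListenerContainerFactory")
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> batchKafkaListenerContainerFactory(
            ConsumerFactory<String, Object> consumerFactory,
            RecordMessageConverter recordMessageConverter) {

        ConcurrentKafkaListenerContainerFactory<String, Object> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true);
        // Wrap the single-record converter so each record in a batch is converted
        factory.setMessageConverter(new BatchMessagingMessageConverter(recordMessageConverter));
        return factory;
    }
}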
Currently, I am using version 2.8.9 of spring-kafka.
It is not currently possible; please open a new feature suggestion on GitHub to allow provisioning both types of converter on the factory.

Kotlin: Json: conflicting annotation schemes for name transcription

I read data from Firebase database into a Kotlin/Android program. The key names in Firebase are different from those of the corresponding Kotlin variables. I test this code with flat JSON files (for good reasons) where I retain the same key names used in Firebase, so I need to transcribe them too.
Firebase wants variables annotated with @PropertyName; but Gson, which I use to read the flat files, wants @SerializedName (which Firebase doesn't understand, unfortunately).
Through trial and error I found that this happens to work:
#SerializedName("seq")
var id: Int? = null
#PropertyName("seq")
get
#PropertyName("seq")
set
Both Firebase and Gson do their thing and my class gets its data. Am I hanging by a thin thread here? Is there a better way to do this?
Thank you!
You can probably solve this by using Kotlin's @JvmField to suppress generation of getters and setters. This should allow you to place the @PropertyName annotation directly on the property. You can then implement a Gson FieldNamingStrategy which checks if a @PropertyName annotation is present on the field and in that case uses its value; otherwise it could return the field name. The FieldNamingStrategy has to be set on a GsonBuilder which you then use to create the Gson instance.
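A minimal sketch of that setup (the Record class and its property are just illustrative):

import com.google.firebase.database.PropertyName
import com.google.gson.FieldNamingStrategy
import com.google.gson.GsonBuilder
import java.lang.reflect.Field

// @JvmField exposes the backing field directly, so @PropertyName lands on the field
// and both Firebase and Gson (via the strategy below) read the same annotation.
class Record {
    @JvmField
    @PropertyName("seq")
    var id: Int? = null
}

// Use the @PropertyName value when present, otherwise fall back to the field name.
val firebaseNaming = FieldNamingStrategy { field: Field ->
    field.getAnnotation(PropertyName::class.java)?.value ?: field.name
}

val gson = GsonBuilder()
    .setFieldNamingStrategy(firebaseNaming)
    .create()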

Realm call setter

I'm using realm as database and kotlin as language.
I implemented my custom setter method for a property.
Does Realm call this setter somehow?
For example:
open class Human : RealmObject() {
    open var Name: String = ""
        set(value) {
            setName(value)
        }
}
Now I also have a changeDate property, and it would be nice if I could automatically set changeDate to the current day in setName.
But I can't do this if Realm also calls this method.
Thanks
I've tried this with Kotlin 1.1.1 and Realm 3.0.0, and it doesn't call the custom setter, it assigns the value in some other way (which means that it even works if your custom setter is empty, which is a bit unexpected).
Edit: Looked at the generated code and the debugger.
When you're using an object that's connected to Realm, it's an instance of a proxy class that's a subclass of the class that you're using in your code. When you're reading properties of this instance, the call to the getter goes down to native calls to access the stored value that's on disk, inside Realm.
Similarly, calling the setter eventually gets to native calls to set the appropriate values. This explains why the setter doesn't get called: Realm doesn't need to call the setter, because it doesn't load the values into memory eagerly, the proxy is just pointing into the real data in Realm, and whenever you read that value, it will read it from there.
As for how this relates to Kotlin code, the calls to the proxy's setter and getter that access the data inside Realm happen whenever you use the field keyword (for the most part).
var Name: String = ""
get() {
return field // this calls `String realmGet$Name()` on the proxy
}
set(value) {
field = value // this calls `void realmSet$Name(String value)` on the proxy
}

How does Hadoop decide which mapper to run in the MapTask class, OldMapper or NewMapper?

I cannot understand the difference between the runOldMapper(...) and runNewMapper(...) methods in the MapTask class. Hadoop decides based on the "useNewApi" parameter from JobConf; but where and when in the framework is this parameter set? I think the default value is FALSE for all jobs. We can set it to TRUE by calling JobConf.setUseNewMapper(boolean flag), which sets "mapred.mapper.new-api", but when and why should we decide to set this parameter?
You're correct in the assumption that this behaviour is triggered by the mapred.mapper.new-api configuration.
Depending on whether you're using the new or old job conf/client, look in the source for:
the org.apache.hadoop.mapreduce.Job.submit() method, which calls the private setUseNewAPI() method. This configures the new-api properties depending on whether the old mapper/reducer class properties are set or not (see the driver sketch below)
org.apache.hadoop.mapred.JobConf - as you note in your question, you, the developer, will need to call setUseNewMapper(true) if you are using a new-API mapper implementation (it is false by default, which matches a mapper class that implements the mapred.Mapper interface, and should be true if your mapper extends the mapreduce.Mapper class)
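To illustrate the first case, a minimal driver sketch using the new API (the job name and class names are just placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NewApiDriver {

    // A new-API mapper: it extends mapreduce.Mapper rather than implementing mapred.Mapper
    public static class PassThroughMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            context.write(key, value);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "new-api example"); // on very old releases: new Job(conf, ...)
        job.setJarByClass(NewApiDriver.class);
        job.setMapperClass(PassThroughMapper.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // No setUseNewMapper() call here: Job.submit() (reached via waitForCompletion)
        // calls the private setUseNewAPI(), which flips mapred.mapper.new-api to true
        // because a mapreduce.Mapper is configured.
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}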

The ObjectContext instance has been disposed and can no longer be used for operations that require a connection

I'm trying to query UserMetaData for a single record using the following query:
using (JonTestDataEntities context = new JonTestDataEntities())
{
    return context.UserMetaData.Single(user => user.User.ID == id);
}
The error I'm receiving is: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection." It is trying to lazy-load Group for the UserMetaData record. How can I change my query to prevent this error?
As the message says, you cannot lazily load it after the function returns, because you've already disposed the context. If you want to be able to access Group, you can make sure you fetch it earlier. The extension method .Include(entity => entity.NavigationProperty) is how you can express this:
using (JonTestDataEntities context = new JonTestDataEntities())
{
    return context.UserMetaData.Include(user => user.Group).Single(user => user.User.ID == id);
}
Also consider adding .AsNoTracking(), since your context will be gone anyway.
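For example (the same query with tracking disabled; the lambda form of Include() is the extension method from the System.Data.Entity namespace):

using (JonTestDataEntities context = new JonTestDataEntities())
{
    return context.UserMetaData
        .Include(user => user.Group)
        .AsNoTracking()
        .Single(user => user.User.ID == id);
}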
You need to create a strong type that matches the signature of your result set. Entity Framework is creating an anonymous type and the anonymous type is disposed after the using statement goes out of scope.
So assigning to a strong type avoids the issue altogether. I'd recommend creating a class called UserDTO since you're really creating a data transfer object in this case. The benefit of the dedicated DTO is you can include only the necessary properties to keep your response as lightweight as possible.
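A rough sketch of that approach (UserDTO and the projected properties are hypothetical; the projection is executed by Single() while the context is still open, so nothing needs lazy loading later):

public class UserDTO
{
    public int Id { get; set; }
    public string GroupName { get; set; }
}

using (JonTestDataEntities context = new JonTestDataEntities())
{
    return context.UserMetaData
        .Where(user => user.User.ID == id)
        .Select(user => new UserDTO
        {
            Id = user.User.ID,
            GroupName = user.Group.Name
        })
        .Single();
}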
