AwesomeWM - How to prevent migration of clients when a screen is disconnected?

After docking or undocking the laptop (two screens disappearing and one appearing, or vice versa), all windows are migrated to one screen.
How can I accomplish the following desired behavior: keep windows associated with disconnected screens, with the tags they had on those screens, letting these windows be invisible (that's ok), until I explicitly choose to migrate a specific window to the current screen (via a Lua command / script of some sort that lets me browse the list of windows). Also, when the screen configuration changes back (e.g. upon re-docking), all windows should become accessible again, as if no screen changes had ever happened. The use case is that, while undocked, I don't need to access all windows.
I looked at no_offscreen, but it didn't seem to be related. Not really sure where to begin.

You need to implement a request::screen handler on the tag and move the tags to the remaining screen. Then optionally add a taglist filter to hide them. Once the screen is back, move the tags back to their original screen.
See https://www.reddit.com/r/awesomewm/comments/5r9mgu/client_layout_not_preserved_when_switching/ for a close enough example.
Another way would be to stop using "real" screens and use "fake" ones. This way you can ignore the fact that they are disconnected and keep everything as if the screen were still there. This requires some more mechanics to prevent the "real" screen from overlapping the fake one (a recipe for disaster).
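For the fake-screen route, a minimal sketch (assuming AwesomeWM 4.x, where screen.fake_add and fake_remove exist; the geometry below is purely illustrative):
-- Stand-in for the disconnected output, so its tags and clients keep a screen to live on
local parked = screen.fake_add(0, 0, 1920, 1080)
-- When the stand-in is no longer needed:
-- parked:fake_remove()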

Taking Emmanuel's answer as the guide, here's what seems to work for me.
My screen.outputs were nil, so I created an ID from the resolution:
-- Return the first key of a table (nil if the table is empty)
local function firstkey(t)
    for k in pairs(t) do
        return k
    end
    return nil
end

local function get_screen_id(s)
    return tostring(s.geometry.width) .. "x" .. tostring(s.geometry.height)
        .. "x" .. tostring(firstkey(s.outputs))
end
Then, inside awful.screen.connect_for_each_screen(function(s) ... end):
-- Check whether existing tags belong to this screen that is being (re)connected
local restored = false
local all_tags = root.tags()
for _, t in pairs(all_tags) do
    if get_screen_id(s) == t.screen_id then
        t.screen = s
        restored = true
    end
end
if restored then
    -- On a restored screen, select its first tag
    local first_tag = s.tags[1]
    if first_tag then
        first_tag.selected = true
    end
else
    -- This screen is entirely brand new: each screen gets its own tag table.
    awful.tag({ "1", "2", "3", "4", "5", "6", "7", "8", "9" }, s, awful.layout.layouts[1])
    -- Remember which screen each tag belongs to, so it can be restored
    -- when the screen disconnects/reconnects
    for _, t in pairs(s.tags) do
        t.screen_id = get_screen_id(s)
    end
end
And handle the signal when a screen disappears:
tag.connect_signal("request::screen", function(t)
    -- The screen has been disconnected: re-assign its orphaned tags to a live screen
    local live_screen = nil
    for s in screen do
        if s ~= t.screen then
            live_screen = s
            break
        end
    end
    -- Move the orphaned tag to the live screen (if one remains)
    if live_screen then
        t.screen = live_screen
    end
end)
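To follow Emmanuel's suggestion of a taglist filter, here is a minimal sketch that hides the adopted tags. It reuses get_screen_id from above and assumes the usual taglist_buttons table from the default rc.lua:
-- Show a tag only if it has no stored screen_id or it matches the screen it currently lives on
local function hide_orphan_tags(t)
    return t.screen_id == nil or t.screen_id == get_screen_id(t.screen)
end
-- In awful.screen.connect_for_each_screen, when building the taglist:
s.mytaglist = awful.widget.taglist {
    screen  = s,
    filter  = hide_orphan_tags,
    buttons = taglist_buttons, -- assumed from the default rc.lua
}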

Related

Inconsistency of EVR and sink writer flipping screen captured images vertically

Using Media Foundation, I'm trying to capture the screen and record and render it with EVR at the same time. I've written a custom media source for this. It works almost perfectly.
The only problem is, by default without reversing the image, EVR renders the screen correctly, but the recorded video is vertically flipped. I'm literally passing the same IMFSample to the sinks. I reversed the image and it records just fine, but now EVR renders it vertically flipped. Why are EVR and sink writer inconsistent and is there a solution for this problem?
Other possibly useful information:
Capture method: Buffer swapping with DirectX9
Media type:
check = MFExtern.MFCreateMediaType(out var mediaType);
check = mediaType.SetGUID(MFAttributesClsid.MF_MT_MAJOR_TYPE, MFMediaType.Video);
check = mediaType.SetGUID(MFAttributesClsid.MF_MT_SUBTYPE, MFMediaType.RGB32);
check = mediaType.SetUINT32(MFAttributesClsid.MF_MT_INTERLACE_MODE, (uint)MFVideoInterlaceMode.Progressive);
check = mediaType.SetSize(MFAttributesClsid.MF_MT_FRAME_SIZE, (uint)mode.Width, (uint)mode.Height);
check = mediaType.SetRatio(MFAttributesClsid.MF_MT_FRAME_RATE, FrameRateNumerator, FrameRateDenominator);
check = mediaType.SetRatio(MFAttributesClsid.MF_MT_PIXEL_ASPECT_RATIO, 1, 1);
check = mediaType.SetBoolean(MFAttributesClsid.MF_MT_ALL_SAMPLES_INDEPENDENT, true);

How to implement JS infinite scroll

I would like to display a table which has several thousand rows with complex formatting (color, font, border, etc. done on the ASP.NET Core server).
Initially, I generated an HTML copy of all the data (stored in a SQL Server database), but realised it wasn't optimal since the generated HTML accounted for more than 50MB.
Now, I only generate about 200 rows: 100 visible and 50 hidden above and below (as a cache). I would like to freely scroll the table, but when there are only 25 hidden rows left above or below, fetch new rows from the controller, which are then prepended or appended to the table. Basically, I want to give myself enough room to populate the table while I'm scrolling through the "hidden" (cache) rows.
Everything seems to work well, but I believe I need to use a web worker to run the function that adds new rows to the table in a background thread, independently of the table being scrolled.
Below is an excerpt of the code:
I use a debounce function to only catch the latest position of the mouse scroll.
The scroll function basically only checks whether there are enough hidden rows (cache) above or below the table. If it reaches the threshold, it either prepends (scroll upwards) or appends (scroll downwards) rows obtained from the controller.
The main issue is that I can't scroll the table while the new rows are being fetched, as the page freezes. It only takes about 1 to 2 seconds to populate the new (scrollable) rows, but it isn't smooth.
Could anyone help me improve the code? (general ideas) I also read that there are already existing libraries but can't really get my head around them.
$('#fields-table > tbody').on('wheel', _.debounce(async function (event) {
    await scroll(); // Probably change it to a web-worker or promise?
}));

async function scroll() {
    var threshold = 200; // Corresponds to approximately 50 rows (above and below).
    var above = $('#fields-table').scrollTop();
    var below = $('#fields-table > tbody').height() - $('#fields-table').height() - above;
    // Gets the scroll delta based on the table heights.
    var delta = 0;
    if (above < threshold) delta = above - threshold; // Scrolls upwards.
    if (below < threshold) delta = threshold - below; // Scrolls downwards.
    await addCacheRows(delta); // Prepends (delta < 0) or appends (delta > 0) rows obtained via the fetch API.
}
Your problem is unlikely to be solved by a web worker. Without seeing more code I cannot tell for sure, but I suspect your code that generates the new rows is not sufficiently efficient. Remember:
Use a DocumentFragment to create the HTML; do not immediately append it to the main DOM tree row by row. Appending elements to the document triggers recalculations.
Unless this is a LOT of data or requires lots of work server-side, you can immediately start preloading the next/previous rows. Keep the promise object and only await it once you need the rows; that's the simplest way to go about it.
Use a passive scroll event listener - Firefox even shows a console warning whenever you don't.
There is no way generating 200 rows of table data should take seconds. Since you use jQuery anyway (really, in 2022?), note that there are plugins for this. I don't remember which one I used, but it worked perfectly and scrolled smoothly with much more data than what you have.
Thank you for your help. I realise it won't be as straightforward as I initially thought (I made some tests with WPF virtualization as well).
Regarding the time it takes to generate the extra rows, I believe it mostly comes from the server. Sure, I can probably load new rows independently of the threshold.
I've never heard of DocumentFragment, but that's something I should definitely consider.

Awesome-WM: Spawn client on same tag as parent

My goal is to let clients that have a parent spawn on the same tag as their parents. Clients w/o parents should spawn on the active tag (as usual).
My first approach is to connect a signal using client.connect_signal("manage", ...). However, I couldn't find a way to get the parent of a client or to check if it has a parent.
Thank you for taking a look at my problem!
Update 1: I found client:get_transient_for_matching(matcher), but the documentation is not very helpful.
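My current reading (not verified against every version) is that the matcher is a function called with each client up the transient_for chain and should return true for the ancestor you want; a hypothetical sketch, with c being the client in question and "SomeParentClass" just a placeholder:
local ancestor = c:get_transient_for_matching(function(p)
    return p.class == "SomeParentClass" -- placeholder matcher
end)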
Update 2: Thanks to Uli for the hint to use client::transient_for as an easier way to get the transient. Using
client.connect_signal("manage", function (c)
parent = c.transient_for
naughty.notify({ preset = naughty.config.presets.critical,
title = "Debug",
text = tostring(c.window) .. " " .. (parent and tostring(parent.window) or "") })
if parent then
-- move new client to same tag and screen as parent
tag = parent.first_tag
screen = parent.screen
c:move_to_tag(tag)
c:move_to_screen(screen)
end
end)
I tried to achieve my goal and added a simple debug output using notifications. Now, only very few new clients actually have a transient_for that is not nil. E.g., git gui spawned from a terminal does not have one. However, I strongly believe it should (or I misunderstood what a transient is).
Ubuntu 20 LTS, Awesome WM version 4.3-4, awesome-extra 2019021001
You are looking for c.transient_for. This contains the client object for the "parent" window or nil.

Awesomewm update watch widget on keypress

I am a new user of awesomewm (but have used other WMs before: i3, bspwm, xmonad, etc). I like to have some shell scripts that I have written in my wibar (I think that's what it's called, the bar at the top of the screen with the taglist) to display things like battery, audio, etc. (as I know is common). Currently I am using awful.widget.watch to do this, as shown below.
-- Right widgets
layout = wibox.layout.fixed.horizontal,
awful.widget.watch('musicbar', 5),
wibox.widget.textbox(' | '),
awful.widget.watch('wifibar', 5),
wibox.widget.textbox(' | '),
awful.widget.watch('audiobar', 0.5),
wibox.widget.textbox(' | '),
awful.widget.watch('batbar', 5),
In the code above, things like "audiobar" are scripts that print information to standard output. It all works perfectly, and even displays the emojis well :). I have one problem (maybe just an optimization).
Currently I have audiobar running twice a second, because it is the only one that changes directly based on my input (changing the volume), and so I want it to update immediately (obviously this still has a <= 0.5 second delay, which is annoying). This means that most of the time it is updating twice a second unnecessarily.
So, I'm wondering if there is a way to have it update when I change the volume, which I've bound to the XF86 audio keys in rc.lua, instead of updating on a timer. Based on my reading of the documentation, there is no way to do this with the watch widget, but as I said I am new to awesome.
Below is how I bound the keys (shouldn't make a difference, but I imagine that this is where the change would be made).
awful.key(
    {},
    "XF86AudioLowerVolume",
    function()
        awful.spawn("amixer -q set Master 5%-")
    end,
    {description = "lower volume", group = "control"}
),
I know that I can use some of the pre-made widgets on GitHub to display volume and the like, but I like the shell scripts because they let me easily move between WMs and are simpler than those widgets (which means I can more easily fix problems with them, make them display exactly what I want, and learn along the way).
Edit: I am willing to learn to do this with Lua; I just first want to see if I can easily do it with shell scripts.
You need to keep around a reference to the timer that awful.widget.watch creates internally. To do this, you need to do something like this in the global context (i.e. outside of the definitions of the widgets or keybindings):
local musicbar_widget, musicbar_timer = awful.widget.watch('musicbar', 5)
You now add musicbar_widget to your wibox (instead of calling awful.widget.watch there). Now, you can "force-update" the widget via musicbar_timer:emit_signal("timeout"). This "pretends" to the widget that the timeout happened again.
In your keybinding (yes, I am mixing up your widgets here, most likely musicbar has nothing to do with the volume):
awful.key(
    {},
    "XF86AudioLowerVolume",
    function()
        awful.spawn("amixer -q set Master 5%-")
        musicbar_timer:emit_signal("timeout")
    end,
    {description = "lower volume", group = "control"}
),
Note that this might or might not work. awful.spawn only starts the command; it does not wait for it to finish. So now you are changing the volume at the same time that you are querying it. If querying finishes faster than changing the volume, the widget will still show the old value.
To only update the widget after changing the volume is done, do something like the following:
awful.spawn.with_line_callback(
    "amixer -q set Master 5%-", {
        exit = function()
            musicbar_timer:emit_signal("timeout")
        end
    })
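An alternative (just a sketch along the same lines) is awful.spawn.easy_async, which also only runs its callback once the command has exited:
awful.spawn.easy_async("amixer -q set Master 5%-", function()
    -- amixer is done, so querying the volume now returns the new value
    musicbar_timer:emit_signal("timeout")
end)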
I ran into a similar problem with streetturtle's volumearc widget. By default, it runs an update command:
Every time the volume is modified through the widget.
Every second to catch any external volume changes (like with key bindings defined in rc.lua).
watch(get_volume_cmd, 1, update_graphic, volumearc)
Of course this has the following disadvantages:
A slight delay in the volume update (one second is still very perceptible).
On my machine, a constant 1% CPU load just for this task. A trifle, I know, but little things add up.
Using a returned update function
A possible solution is to return an update function, make it available in rc.lua, and call it whenever the volume is modified.
In volumearc.lua, in the worker function, we put the widget update into a dedicated function:
local ext_update = function()
    spawn.easy_async(get_volume_cmd,
        function(stdout, stderr, exitreason, exitcode)
            update_graphic(volumearc, stdout, stderr, exitreason, exitcode)
        end)
end
And we return it both from the worker function:
return volumearc, ext_update
and from the module itself; the two values returned by worker propagate through the __call metamethod, so the module-level return stays a one-liner:
return setmetatable(widget, { __call = function(_, ...) return worker(...) end })
Now we can use it in rc.lua:
local volumearc_widget = require("widgets.volume-widget.volumearc")
-- ...
local GET_VOLUME = "amixer sget Master"
local INC_VOLUME = "amixer sset Master 3%+"
local DEC_VOLUME = "amixer sset Master 3%-"
local TOG_VOLUME = "amixer sset Master toggle"
myvolume, volume_update = volumearc_widget({
    get_volume_cmd = GET_VOLUME,
    inc_volume_cmd = INC_VOLUME,
    dec_volume_cmd = DEC_VOLUME,
    tog_volume_cmd = TOG_VOLUME,
})
-- `myvolume` is the widget that can be added as usual in the wibox.
-- `volume_update` is the function to call in order to trigger an update.
-- ...
-- In the key bindings:
awful.key({}, "XF86AudioMute",
    function () awful.spawn(TOG_VOLUME) volume_update() end,
    {description = "mute", group = "media"}),
awful.key({}, "XF86AudioRaiseVolume",
    function () awful.spawn(INC_VOLUME) volume_update() end,
    {description = "raise volume", group = "media"}),
awful.key({}, "XF86AudioLowerVolume",
    function () awful.spawn(DEC_VOLUME) volume_update() end,
    {description = "lower volume", group = "media"}),
Now the volume will be updated both when changed from the widget and when changed with the media keys, instantly and without the need for polling.
Caveat
This does not catch changes made through other interfaces, such as alsamixer. To catch those changes too, you might want to keep a very slow poll running alongside (say once per minute).
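A minimal sketch of such a fallback, using a plain gears.timer that calls the returned update function (instead of a second watch call; volume_update is the function from above):
local gears = require("gears")
-- Slow fallback poll: pick up volume changes made outside awesome (e.g. in alsamixer)
gears.timer {
    timeout   = 60,
    autostart = true,
    callback  = function() volume_update() end,
}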
The code given here is specific to streetturtle's widget, but the concept of returning an internal update function applies to any widget.

XPages datagrid not exiting from edit mode

I use dojox.grid.DataGrid.
If the DataGrid contains lots of rows (for example 200) and I scroll, cells do not exit from edit mode. Does anyone know what the problem is?
Update: Or maybe someone knows how to use dgrid / gridx in XPages, because I found the next big bug - broken encoding after saving through the REST service :(
I just did some testing and I believe I'm seeing the same thing. It seems to be fine to edit and save as I move through the grid. I can scroll down as needed and save changes. However, when I scroll back up and put a cell in edit mode, it doesn't save the changes -- it immediately reverts to the original value. And sometimes it just leaves the cell in edit mode.
I would agree that it seems to be an issue with memory management. If I set the rowsPerPage to a number that will keep all rows in memory, it appears (with very limited testing) that I can scroll up and down and make changes and they're all saved.
I don't have a solution at the moment, but what I would suggest in lieu of a perfect solution is to find a way to set rowsPerPage to a number greater than the amount of rows that will be displayed in the grid. If there's too much data for that to be feasible, then the approach I would take is to provide filtering on the grid to keep the maximum number of rows displayed much lower and then it won't be as much of a performance hit to set the rowsPerPage to a sufficient amount.
If I come across a better solution, I'll come back and post it here.
Yeah! I solved it! :) There is an error in FileStore.js (Extension Library). Add this in onClientLoad; I changed the line marked "!!error code":
restViewItemFileService._processResponse = function(requestObject, data) {
    this._items.splice(0, this._items.length); // !!error code -> this.close();
    this._start = requestObject.start;
    //TODO: clear identity?
    dojo.forEach(data.items, function(entry, idx) {
        var item = {storeRef: this, attributes: entry};
        var id = item.attributes[this._identity];
        var pending = this._pendings[id];
        if (pending) {
            // copy any pending modified attributes onto the freshly loaded item
            for (var s in pending.modAttrs) {
                item.attributes[s] = pending.modAttrs[s];
            }
        }
        this._byIdentity[id] = item;
        this._items.push(item);
    }, this);
    this._topLevelEntries = data['#toplevelentries'];
    this.onData(requestObject, data);
    this._finishResponse(requestObject);
}
