

In this post, I will go through the current stages of work on a 3D and (optionally) VR desktop for the Arcan display server. It is tentatively called safespaces (Github link) as an ironic remark on the ‘anything but safe’ state of what is waiting inside.

For the impatient, here is a video of it being used (Youtube link):

The explanation of what is going on can be found in the ‘High Level Use’ section further below. To help navigate the post, here are a few links to the individual sections:

One of the absolute main goals with the Arcan project as a whole is to explore different models for all the technical aspects that go into how we interact with computers, and to provide the infrastructure for reducing the barrier to entry for such explorations. The overall ambition is that it should be ‘patching-scripts’ levels of difficult to piece together or tune a complete desktop environment to fit your individual fancy, and that major parts of such scripts could – with little to no modification – be shared or reused across projects.

For this reason, I’ve deliberately tried to avoid repeating or imitating the designs that have been used by projects such as Windows, Android, Xorg, OS X and so on. The reason was not to question or challenge their technical or business-related soundness as such – those merits are already fact. Instead, the reason was to find things that were not exactly “sound” business – hence why this has all been kept as a non-profit, self-financed thing.

During the last few months, I’ve started unlocking engine parts that were intended for building VR environments, and many of the design choices had this idea as part of the implicit requirements specification very early on.

The biggest hurdle has been hardware quality and availability, an area where we are finally starting to reach a workable level – the Vive/PSVR level of hardware is definitely “good enough” to start experimenting with, yet terrible enough to not be taken seriously. Arcan as a whole is in a prime position to do this “right”, in part because of its possible role as both a display server and as a streaming multimedia processor / aggregator.

Some of the details of the TUI subproject and the SHMIF IPC subsystem also fit into the bigger puzzle, as their implicit effect is to push for thinking of clients as interactive streams of different synchronised data types – rather than opaque pixmaps produced in monolithic ‘GUI toolkit thiefdoms’.

Starting with a shorter but slightly more crowded video from the same desktop session:

In the video, you see parts of the first window management scheme that I am trying out. Although the video is presented as monoscopic, the content itself is stereoscopic – 3D encoded video looks 3D when viewed on an HMD.

The window management scheme is a riff on the tiling window manager, mostly due to the current lack of more sophisticated input devices that would benefit from something more refined. Actual input is keyboard and mouse in this round, though there are ongoing experiments with building a suitable glove.

“Windows”, or rather, models – since you have a number of different shapes and mappings to choose from – are grouped in cylindrical layers. The user is fixed at the centre and the input-focused window is positioned at 12 o’clock, with other sibling windows rotated to face the user (“billboarding”), scaled down and positioned around the layer geometry. Each layer can either be arranged at a fixed distance – for heads-up display components and infinite geometry like skyboxes – or layers can be swapped back and forth, with optional opacity fade-offs, level-of-detail triggers and so on based on layer distance.
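To make the layout concrete, here is a minimal geometric sketch of the scheme described above. It is written in Python purely for illustration (safespaces itself is scripted in Lua on top of Arcan), and every name in it (Model, Layer, layout_layer, the spread, scale and fade parameters) is hypothetical rather than part of the actual safespaces API; the numbers are made-up defaults.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Model:
    name: str
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    yaw: float = 0.0      # rotation around the vertical axis, in degrees
    scale: float = 1.0
    opacity: float = 1.0

@dataclass
class Layer:
    models: List[Model]
    distance: float = 1.0   # cylinder radius, i.e. distance from the user
    focus_index: int = 0    # which model sits at 12 o'clock

def layout_layer(layer: Layer,
                 spread_deg: float = 40.0,
                 sibling_scale: float = 0.6,
                 fade_start: float = 1.0,
                 fade_end: float = 3.0) -> None:
    """Arrange the layer's models on a cylinder around the user (origin).

    The focused model faces the user head-on at 12 o'clock; siblings are
    spread along the cylinder, scaled down and yawed back towards the user
    ("billboarding"). Opacity fades out linearly with layer distance, as a
    stand-in for the fade-off / level-of-detail triggers described above.
    """
    t = (layer.distance - fade_start) / max(fade_end - fade_start, 1e-6)
    layer_opacity = max(0.0, min(1.0, 1.0 - t))

    for i, m in enumerate(layer.models):
        offset = (i - layer.focus_index) * spread_deg   # degrees from focus
        a = math.radians(offset)
        # place on the cylinder: straight ahead is -Z, right is +X
        m.position = (math.sin(a) * layer.distance, 0.0,
                      -math.cos(a) * layer.distance)
        m.yaw = -offset          # rotate so the model keeps facing the user
        m.scale = 1.0 if i == layer.focus_index else sibling_scale
        m.opacity = layer_opacity

# usage: three models on a layer 1.5 m out, the middle one focused
layer = Layer([Model("terminal"), Model("browser"), Model("media")],
              distance=1.5, focus_index=1)
layout_layer(layer)
for m in layer.models:
    print(m.name, tuple(round(v, 2) for v in m.position),
          m.yaw, m.scale, round(m.opacity, 2))
```

The point is only the geometry: a window's slot relative to the focused one determines its angle around the user, that angle gives both its position on the cylinder and the yaw needed to keep it facing inwards, and the layer's distance drives a simple opacity fade.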
