Dear Friends of Realtime 3D Graphics, 3D Web, Enternet, Metaverses and similar topics!
I am not sure whether it is important, but over the last weeks and months I have sent some signals via this blog that could possibly lead to serious misconceptions.
First, and most important: the statements given in the following blog posting are still valid!!!
That is, in detail:
I have prepared for “Plan A” (meanwhile abandoned): convince my employer to start projects about what-I-call “Integrated 3D Collaboration” (integrated with NGN, integrated with MCN, 3GPP/ETSI based)
I have prepared for “Plan B”: support “Integrated 3D Collaboration” on my own, based on IETF/W3C/Web3D (still desired during my spare time)
I have prepared for “Plan C”: abandon any professional support of “Integrated 3D Collaboration” (finally selected)
This “Decision for Plan C” was made public on my Facebook account.
Anyway: if I stumble upon some reliable indication that something equivalent to my “Plan A” has been started somewhere within the industry, then I will “officially start” my project “Fiat-A” as an “encoded hint bit” for the Web3D community. This statement is still true, although I have officially abandoned my “personal Plan A”.
The above statement about the project “Fiat-A” is independent of a possible foundation of a “Society for Lifelong Collaborating”, which might happen at any time without any implication, and it is independent of the S&P-ARK project, which was started in spring 2022 without any particular reason.
I think I may have caused some confusion with those two initiatives.
The following blog posting indicated the start of the DIGITS/S&P-ARK/ALPES project. I am still not sure, and I will have to define, whether this project is still a “template” project (i.e. not a REAL project, but a project which I use to report my opinion about projects that *should* be started by the community) or not.
The following blog posting should be completely ignored; the discussion is not yet closed.
I have tried to explain that the identity of an object cannot be strictly derived from physical laws.
No, the identity of objects is created by physicists when they do what we call “modelling”.
The questions, e.g. “what is an electron?” or “what is a positron?”, are not physical questions per se. They are questions that are answered during the process of modelling, BEFORE we create the mathematical laws of physics, which then in turn describe the interactions of the objects.
Now one could ask: “How can we dare to think the universe IS a GROUPING OF OBJECTS AND INTERACTIONS?”
Well, it’s OUR way of seeing the universe, of PERCEIVING the universe, and this way has been successful in the past.
This or a similar answer would be given by a biologist who applies the wisdom of the theory of evolution.
And since we expect that a single person will not find answers that completely contradict everything mankind has found before, it actually happened that I found the same – or at least similar – answers when I dealt with the implementation of the experimental SMUOS Framework.
Now you might ask: “What does the implementation of a 3D multiuser framework have in common with thinking about epistemology?”
Well, quite a lot, which I’d like to explain in the present posting.
The SMS Framework
Well, let me start with an explanation of the acronym SMS = Simple Multiuser Scene.
My intention was to set a counterpoint to the term MMORPG (Massively Multiplayer Online Role Playing Game).
The term “Simple” should indicate: an SMS is intended to be used by small(!) groups of people (e.g. five or ten people). This does not preclude re-using the same scene for several or many groups of people, but only small groups, a few people each, would actually “meet” in the scene.
Furthermore, the term “Simple” should indicate that we would only use stable, rather old, but commonly and freely available rendering techniques (e.g. by employing the X3D/VRML standards).
Now, what is the basic idea of the SMS Framework?
The SMS Framework should/will be an intermediate layer that implements functions which are not (yet) available in the X3D/VRML standards, but need to be available in each and every multiuser scene.
Thus the SMS Framework should help the authors of multiuser scenes to avoid duplicating common effort.
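To illustrate the idea of such an intermediate layer, here is a minimal sketch (all class and method names are hypothetical, not the actual SMUOS/SMS API): a layer that sits between the authored scene and the network, offering the “shared state” functions every multiuser scene needs, so the scene author does not have to re-implement them.

```python
# Hypothetical sketch of an intermediate "multiuser layer" between
# the authored scene and the network; names are illustrative only.

class SharedField:
    """A scene field whose value is kept in sync across all scene instances."""
    def __init__(self, name, value):
        self.name = name
        self.value = value

class MultiuserLayer:
    """Collects functions every multiuser scene needs but the
    rendering standard (e.g. X3D/VRML) does not provide itself."""
    def __init__(self):
        self.shared = {}      # field name -> SharedField
        self.listeners = []   # callbacks into the local scene

    def register(self, name, initial):
        """Called by the scene author instead of hand-writing sync logic."""
        self.shared[name] = SharedField(name, initial)
        return self.shared[name]

    def set(self, name, value):
        """Local change: update the field and build an update message
        that would be sent to the other scene instances."""
        self.shared[name].value = value
        return {"field": name, "value": value}

    def receive(self, msg):
        """Remote change: apply a peer's update to the local instance."""
        self.shared[msg["field"]].value = msg["value"]
        for callback in self.listeners:
            callback(msg["field"], msg["value"])
```

The design point is only that the sync plumbing lives in one reusable layer, not in every scene.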
When we come back to the example of “Indirect Reality”, what would it look like?
Well, if we remember our example of the robot (RLA, i.e. Real Life Avatar) that is controlled via a VR headset and VR controllers (PSI, i.e. Personal Scene Instance) and via the Internet,
then we could generalize this into a robot that needs more than one person to control it (e.g. a submarine that is controlled by Alice, Bob and Charlie):
What can we see in Figure 2?
Besides the three PSIs (Personal Scene Instances) for Alice, Bob and Charlie, we have another Scene Instance, the “Interface to Reality” (ITR), which handles the communication with the “real submarine”.
Neither Alice, Bob nor Charlie can know whether the ITR is connected to a “real” submarine, or whether the ITR is just a simulator that simulates a submarine.
All four scene instances (PSIs and ITR) run the same Scene (red colour), which has probably been downloaded from the ITR.
The SMS-FW (SMS Framework) is not actually a necessary part of our philosophical considerations; it could be replaced by some functions of the common Scene (red colour).
We see two types of Internetworking in this Figure 2 (I call them the “3D Web” and the “Enternet”):
“3D Web”: The common Scene (red colour) must have been downloaded from some common server(s)
“Enternet”: The scene instances must be somehow SYNCed and they must be SYNCed with the RR (black arrows)
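The distinction between the two kinds of internetworking above can be sketched in a few lines of code (a toy model under my own assumptions; the server URL, field names and the in-memory “server” are purely illustrative):

```python
# Toy model of the two kinds of internetworking:
# "3D Web"  = one-time download of the common Scene description,
# "Enternet" = continuous SYNC of state among the running instances.

COMMON_SERVER = {"scene.x3d": "<X3D>...common scene...</X3D>"}  # stand-in for a real server

def download_scene(url):
    """'3D Web': every instance (the PSIs and the ITR) fetches the same scene."""
    return COMMON_SERVER[url]

class SceneInstance:
    def __init__(self, name, scene_source):
        self.name = name
        self.scene = scene_source   # identical in all instances
        self.state = {}             # would diverge unless SYNCed

    def sync_out(self):
        """'Enternet': publish the local state to the other instances."""
        return dict(self.state)

    def sync_in(self, remote_state):
        """'Enternet': merge a peer's state into the local instance."""
        self.state.update(remote_state)

# All instances run the same downloaded Scene ...
src = download_scene("scene.x3d")
alice = SceneInstance("Alice", src)
itr = SceneInstance("ITR", src)

# ... and the ITR's view of the (real or simulated) submarine is SYNCed out.
itr.state["depth_m"] = 120
alice.sync_in(itr.sync_out())
```

Alice’s instance ends up with the ITR’s submarine state without ever knowing whether that state came from a real submarine or from a simulator, which is exactly the point made above.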
What’s the Connection to Epistemology?
Well, in our example the “Common Scene” (red colour) has been downloaded from some common server.
Can we not take this as a metaphor for our “common model of the universe” (i.e. for our science), which we have “downloaded” during our education at school and university?
The “Common Scene” defines which aspects of the submarine and its surroundings can be perceived by Alice, Bob and Charlie. Additionally, each user has a “Personal Scene Instance” (PSI), i.e. a computer, a headset and so on, that specifically influences the perceived reality during perception.
Can we not take this as a metaphor for our personal “Model of the Universe (MotU)”, which we “carry” in our mind and which influences the way we can perceive the universe?
These considerations led me to think about epistemology during the years 2014 to 2018 and to write a few “religious booklets” in German.
Please don’t be shocked. This posting will be about some philosophy.
This is not a blog posting about science of nature, nor about science of technology, it could even be interpreted as a religious posting.
Hence, this posting is a temporary contradiction (let’s say an exception according to Heisenberg) to my principle of keeping this blog an agnostic blog.
If you cannot accept this, then please ignore this posting 🙂 .
Perhaps this blog is not the right place for a discussion about soul/body/mind/spirit, but I have to say that I’m a programmer, and therefore I am keen to layer any system from top to bottom; it’s an occupational disease of us programmers.
Thus it happened to me that I once had an idea about layering the whole universe.
Since we already learn in basic training that the Internet is broken down into 5 layers, namely
the application layer (L5),
the transport layer (L4),
the network layer (L3),
the data link layer (L2) and
the physical layer (L1),
and because I dabbled a bit in virtual worlds as a hobby, one day it “happened” to me that I had the following thoughts:
Let’s assume I operated a robot using a VR headset and VR controllers.
So I would “take on the role” of the robot (which I call ‘Real Life Avatar (RLA)’) “through” a virtual reality (which I call ‘Personal Scene Instance (PSI)’).
The robot’s electronic eyes would become “my remote eyes”
The robot’s arms, hands, legs and feet would become “my remote limbs”
and so on
generally speaking: the robot (RLA) would provide me with some “remote senses and skills (rSaSk)” via the Internet and via the PSI (and via the “User Interface (UI)” of the PSI)
Wouldn’t we call this “Tele Presence” or, maybe, “Indirect Reality”?
Tele Presence = I am present “in a remote way” (as seen by others)
Indirect Reality = The reality is present “in a remote way” (as seen by me)
Wouldn’t this look as follows, if we drew the layers and the entities?
That is why I have drawn four entities in Figure 1.
First, there is the person who has inherited and learned their “mind” and their “senses and skills” (SaSk) throughout life and thus is now able to “understand” and “grasp” the universe, respectively, in a more or less correct way.
The Real Life Avatar (RLA, i.e. the robot) and the Personal Scene Instance (PSI, i.e. the VR headset and the VR controllers, controlled by a personal computer) are the second and the third entity in this figure. They will provide the person with “Remote Senses and Skills” (rSaSk) via the “User Interface” (UI).
Since person+PSI and RLA are sometimes located in places far away from each other, the Internet is usually in between.
which I will come back to in a moment, was not “brain-born” but came about “naturally”.
How to get from the Concept of “Identity” to “Layer -1”.
We explained that the concept of “identity” does not necessarily result from physical laws (I will not repeat the argument here).
Consequently, “Layer 0” in Figure 1 is – provisionally – drawn as a continuous layer without boundaries between the entities. The laws of physics do NOT mandate a specific demarcation of entities.
On the contrary, we know from the concept of “modeling” that people intuitively draw the boundaries of the system and boundaries within the system before they begin to formulate physical laws.
Is this demarcation of entities arbitrary, or is there something like a preferred demarcation? That is, can one say that one model is “more correct” than another?
So we define a “Layer -1” in which we place the thing-in-itself according to Immanuel Kant and also the essence or the soul of people.
So in this “Layer -1” the division of the universe into physical objects (and subjects) happens, and that’s where everything happens that really, actually, truthfully happens.
So the physics in this PICTURE (and it’s just a PICTURE) is already an INTERPRETATION of reality, it doesn’t describe reality IN ITSELF.
So we now draw our 4 entities from the example (the person (me), the Personal Scene Instance (PSI, i.e. the VR headset and the VR controllers, controlled by a personal computer), the Internet (here a cable) and the Real Life Avatar (RLA, i.e. the robot)) in 8 layers:
HW means hardware; TII means thing-in-itself (according to Immanuel Kant)
What the person can say about themselves
I am (Layer -1)
I am my body (Layer 0)
I am a body with senses and abilities (Layer 1-5)
I am a body with intelligence/mind (Layer 6)
Outlook to the Next Posting
In the next posting, I will try to combine this “theory of SMS (Simple Multiuser Scenes)” – see above – with my “Small religious booklet No. 13” (see https://letztersein.com/kleine-religiose-buchlein), which is called “Models of the Reality” (written in English).