This time, with newsletter no. 88, I would like to make an appeal.
As you know, I care about sustainability in the best sense, and one of the most important strategies for living sustainably is, in my opinion, to keep the peace.
The inner peace with ourselves and God, as well as the outer peace with our “dear” fellow human beings.
We are waging a war against the next generation. Being aborted has meanwhile become one of the most frequent causes of death, more frequent even than dying in a traffic accident.
Even though some people try to dismiss abortions as a kind of “traffic accident” 😮
My planned association, the “Society for lifelong collaborating”, is a peace project, a place on the Web where the “whole world can meet”, and I really mean the WHOLE world, including China, Russia and Africa.
It is not yet clear which technical tools we will use, be it VPNs or even the TOR browser; in any case, people with an IT background will be urgently needed during the early days of the association.
Please get in touch. Most urgently we need a chairman; the roles of treasurer and secretary I can fill myself.
Dear Friends of Realtime 3D Graphics, 3D Web, Enternet, Metaverses and similar topics!
I am not sure whether it is important, but during the last weeks and months I have sent some signals via this blog that could possibly lead to serious misconceptions.
First, and most important: the statements given in the following blog posting are still valid!!!
I have prepared for “Plan A” (meanwhile abandoned): convince my employer to start projects about what I call “Integrated 3D Collaboration” (integrated with NGN, integrated with MCN, 3GPP/ETSI based)
I have prepared for “Plan B”: support “Integrated 3D Collaboration” on my own, based on IETF/W3C/Web3D (still pursued in my spare time)
I have prepared for “Plan C”: abandon any professional support of “Integrated 3D Collaboration” (finally selected)
This “Decision for Plan C” was made public on my Facebook account.
Anyway: if I stumble upon some reliable indication that something equivalent to my “Plan A” has been started somewhere within the industry, then I will “officially start” my project “Fiat-A” as an “encoded hint bit” for the Web3D community. This statement remains true, although I have officially abandoned my personal “Plan A”.
The above statement about the project “Fiat-A” is independent of a possible foundation of a “Society for Lifelong Collaborating”, which might happen at any time without implying anything, and it is independent of the S&P-ARK project, which was started in spring 2022 without any such reason.
I think I may have caused some confusion with those two initiatives.
The following blog posting announced the start of the DIGITS/S&P-ARK/ALPES project. I am still not sure, and will have to decide, whether this project is still a “template” project (i.e. not a REAL project, but a project I use to express my opinion about projects that *should* be started by the community) or not.
Please don’t be shocked. This posting will be about some philosophy.
This is not a blog posting about natural science, nor about technology; it could even be interpreted as a religious posting.
Hence, this posting is a temporary contradiction (let's say an exception in the spirit of Heisenberg) to my principle of keeping this blog agnostic.
If you cannot accept this, then please ignore this posting 🙂.
I have tried to explain that the identity of an object cannot be strictly derived from physical laws.
No, the identity of objects is created by physicists when they do what we call “modelling”.
Questions such as “what is an electron?” or “what is a positron?” are not physical questions per se. They are answered during the process of modelling, BEFORE we formulate the mathematical laws of physics, which in turn describe the interactions of the objects.
Now one could ask: “How can we dare to think the universe IS a GROUPING OF OBJECTS AND INTERACTIONS?”
We think we know that the universe presents itself through the phenomena of matter/energy and of spacetime, but what the hell gives us the right to split matter/energy and spacetime into “objects and interactions”?
Well, it's OUR way of seeing the universe, of PERCEIVING the universe, and this way has been successful in the past.
This answer, or a similar one, would be given by a biologist who applies the wisdom of the theory of evolution.
And since we would not expect a single person to find answers that completely contradict everything mankind has found before, it is perhaps no surprise that I arrived at the same – or at least similar – answers when I dealt with the implementation of the experimental SMUOS Framework.
Now you might ask: “What does the implementation of a 3D multiuser framework have in common with thinking about epistemology?”
Well, a lot, which I'd like to explain in the present posting.
The SMS Framework
Well, let me start with an explanation of the acronym SMS = Simple Multiuser Scene.
My intention was to set a counterpoint to the term MMORPG (Massively Multiplayer Online Role Playing Game).
The term “Simple” should indicate: SMS are intended to be used by small(!) groups of people (e.g. five or ten people). This does not preclude re-using the same scene for several or many groups of people, but only small groups, of a few people each, would actually “meet” in the scene.
Furthermore, the term “Simple” should indicate that we would only use stable, rather old, but commonly and freely available rendering techniques (e.g. by employing the X3D/VRML standards).
Now, what is the basic idea of the SMS Framework?
The SMS Framework should/will be an intermediate layer that implements functions which are not (yet) available in the X3D/VRML standards, but are needed in each and every multiuser scene.
Thus the SMS Framework should spare the authors of multiuser scenes these common efforts.
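To make the idea of such an intermediate layer more tangible, here is a minimal sketch in TypeScript. It is only an illustration of the concept; all names (SmsFramework, SharedField, the session URL) are hypothetical and not taken from the actual SMS/SMUOS code:

```typescript
// Hypothetical sketch of an intermediate multiuser layer; the names are
// illustrative assumptions, not the real SMS/SMUOS Framework API.

// A value that the layer keeps consistent across all scene instances.
interface SharedField<T> {
  get(): T;
  set(value: T): void;                    // local write, propagated to peers
  onChange(cb: (value: T) => void): void; // fires for local and remote writes
}

// Functions every multiuser scene needs, but plain X3D/VRML does not provide.
interface SmsFramework {
  join(sessionUrl: string, user: string): Promise<void>;
  sharedField<T>(name: string, initial: T): SharedField<T>;
  onUserJoined(cb: (user: string) => void): void;
  onUserLeft(cb: (user: string) => void): void;
}

// With such a layer, a scene author writes only scene logic:
async function demo(fw: SmsFramework): Promise<void> {
  await fw.join("wss://example.org/sms/demo-scene", "Alice");
  const door = fw.sharedField<boolean>("doorOpen", false);
  door.onChange((open) => console.log(`door is now ${open ? "open" : "closed"}`));
  door.set(true); // every other scene instance sees the same state
}
```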
When we come back to the example of “Indirect Reality”, how would it look?
Well, when we remember our example of the robot (RLA, i.e. Real Life Avatar) that is controlled via a VR headset and VR controllers (PSI, i.e. Personal Scene Instance) and via the Internet,
Figure 1: Remotely controlling a robot via VR headset and controllers,
then we could generalize the robot into one that needs more than one person to control it (e.g. a submarine that is controlled by Alice, Bob and Charlie):
Figure 2: Simple Multiuser Scene / Simple Multiuser Session (SMS)
What can we see in Figure 2?
Besides the three PSIs (Personal Scene Instances) for Alice, Bob and Charlie, we have another Scene Instance, the “Interface to Reality” (ITR), which handles the communication with the “real submarine”.
Neither Alice, nor Bob, nor Charlie can know whether the ITR is connected to a “real” submarine or whether the ITR is just a simulator that simulates a submarine.
All four scene instances (PSIs and ITR) run the same Scene (red colour), which has probably been downloaded from the ITR.
The SMS-FW (SMS Framework) is not actually a necessary part of our philosophical considerations; it could be replaced by some functions of the common Scene (red colour).
We see two types of internetworking in Figure 2 (I call them the “3D Web” and the “Enternet”); a small sketch of both follows after this list:
“3D Web”: The common Scene (red colour) must have been downloaded from some common server(s)
“Enternet”: The scene instances must somehow be SYNCed with each other, and they must be SYNCed with the RR (black arrows)
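As announced, here is a minimal sketch of these two kinds of internetworking. The URLs and the shape of the SYNC messages are purely hypothetical assumptions, not a defined protocol:

```typescript
// Hypothetical sketch of the two kinds of internetworking in Figure 2;
// URLs and message fields are illustrative assumptions only.

// "3D Web": the common Scene is downloaded once from a common server.
async function loadCommonScene(): Promise<string> {
  const resp = await fetch("https://example.org/scenes/submarine.x3d");
  return resp.text(); // the same X3D scene for all participants
}

// "Enternet": afterwards, the scene instances keep each other in SYNC.
interface SyncMessage {
  sender: string;    // "Alice", "Bob", "Charlie" or "ITR"
  field: string;     // e.g. "rudderAngle"
  value: unknown;    // the new value of that shared field
  timestamp: number; // for ordering concurrent updates
}

function connectEnternet(onSync: (m: SyncMessage) => void): WebSocket {
  const ws = new WebSocket("wss://example.org/sms/submarine");
  ws.onmessage = (ev) => onSync(JSON.parse(ev.data) as SyncMessage);
  return ws;
}
```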
What’s the Connection to Epistemology?
Well, in our example the “Common Scene” (red colour) has been downloaded from some common server.
Can we not take this as a metaphor for our “common model of the universe” (i.e. for our science), which we have “downloaded” during our education at school and university?
The “Common Scene” defines which aspects of the submarine and its surroundings can be perceived by Alice, Bob and Charlie. Additionally, each user has a “Personal Scene Instance” (PSI), i.e. a computer, a headset and so on, which specifically influences the perceived reality during perception.
Can we not take this as a metaphor for our personal “Model of the Universe (MotU)”, which we “carry” in our mind and which influences the way we can perceive the universe?
These considerations led me to think about epistemology during the years 2014 to 2018 and to write a few “religious booklets” in German.
Please don’t be shocked. This posting will be about some philosophy.
This is not a blog posting about natural science, nor about technology; it could even be interpreted as a religious posting.
Hence, this posting is a temporary contradiction (let's say an exception in the spirit of Heisenberg) to my principle of keeping this blog agnostic.
If you cannot accept this, then please ignore this posting 🙂.
Dear Reader!
Perhaps this blog is not the right place for a discussion about soul/body/mind/spirit, but I have to say that I'm a programmer and therefore keen to layer any system from top to bottom – it's an occupational disease of us programmers.
Thus it came about that I once had an idea about layering the whole universe.
Since we already learn in basic training that the Internet is broken down into 5 layers, namely
application layer (L5),
transport layer (L4),
network layer (L3),
data link layer (L2) and
physical layer (L1),
and because I was somewhat occupied with virtual worlds in my hobby, one day the following thoughts “happened” to me:
Let’s assume I operated a robot using a VR headset and VR controllers.
So I would “take on the role” of the robot (which I call ‘Real Life Avatar (RLA)’) “through” a virtual reality (which I call ‘Personal Scene Instance (PSI)’).
The robot’s electronic eyes would become “my remote eyes”
The robot’s arms, hands, legs and feet would become “my remote limbs”
and so on
generally speaking: the robot (RLA) would provide me with some “remote senses and skills (rSaSk)” via the Internet and via the PSI (and via the “User Interface (UI)” of the PSI)
Wouldn’t we call this “Tele Presence” or, maybe, “Indirect Reality“?
Tele Presence = I am present “in a remote way” (as seen by others)
Indirect Reality = The reality is present “in a remote way” (as seen by me)
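As a rough illustration of such an “Indirect Reality” loop, here is a minimal sketch; the endpoint and the message shapes are hypothetical assumptions, not a real robot API:

```typescript
// Hypothetical telepresence loop: VR input goes out to the robot (RLA),
// sensor data comes back into the virtual reality (PSI).
interface ControllerPose { position: [number, number, number]; grip: boolean; }
interface RobotSenses    { cameraFrame: ArrayBuffer; } // "my remote eyes"

function runIndirectReality(
  readControllers: () => ControllerPose,      // from the VR controllers
  renderToHeadset: (s: RobotSenses) => void,  // to the VR headset
): void {
  const link = new WebSocket("wss://example.org/rla/robot-01"); // the Internet hop
  link.binaryType = "arraybuffer";
  // Skills: forward my hand movements to the robot's limbs (20 times per second).
  link.onopen = () =>
    setInterval(() => link.send(JSON.stringify(readControllers())), 50);
  // Senses: render what the robot perceives back into my PSI.
  link.onmessage = (ev) => renderToHeadset({ cameraFrame: ev.data as ArrayBuffer });
}
```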
Wouldn’t this look as follows if we drew the layers and the entities?
Figure 1: Indirect Reality / Telepresence
That is why I have drawn four entities in Figure 1.
First, there is the person who has inherited and learned their “mind” and their “senses and skills” (SaSk) throughout life and thus is now able to “understand” and “grasp” the universe, respectively, in a more or less correct way.
The Real Life Avatar (RLA, i.e. the robot) and the Personal Scene Instance (PSI, i.e. the VR headset and the VR controllers, controlled by a personal computer) are the second and the third entity in this figure. They will provide the person with “Remote Senses and Skills” (rSaSk) via the “User Interface” (UI).
Since person+PSI and RLA are sometimes located in places far away from each other, the Internet is usually in between.
So the idea of layering, which I will come back to in a moment, was not “brain-born” but came about “naturally”.
How to get from the Concept of “Identity” to “Layer -1”
We explained that the concept of “identity” does not necessarily result from physical laws (I will not repeat this argument here).
Consequently, “Layer 0” in Figure 1 is – provisionally – drawn as a continuous layer without boundaries between the entities. The laws of physics do NOT mandate a specific demarcation of entities.
On the contrary, we know from the concept of “modeling” that people intuitively draw the boundaries of the system and boundaries within the system before they begin to formulate physical laws.
Is this demarcation of entities arbitrary, or is there something like a preferred demarcation? So can one say that one model is “more correct” than another?
The fact that through discussion and persuasion – without using violence – we always find models that are (almost) universally recognized throughout humanity, and also the fact that we dare to do science and technology at all, suggest that there are preferred demarcations in physics and in nature, at least in relation to humanity.
So we define a “Layer -1” in which we place the thing-in-itself according to Immanuel Kant and also the essence or the soul of people.
So in this “Layer -1” the division of the universe into physical objects (and subjects) happens, and that's where everything happens that really, actually, truthfully happens.
So the physics in this PICTURE (and it’s just a PICTURE) is already an INTERPRETATION of reality, it doesn’t describe reality IN ITSELF.
So we now draw our 4 entities from the example (the person (me), the Personal Scene Instance (PSI, i.e. the VR headset and the VR controllers, controlled by a personal computer), the Internet (here a cable) and the Real Life Avatar (RLA, i.e. the robot)) in 8 layers:
Figure 2: Introduction of “Layer -1”
HW means hardware; TII means thing-in-itself (according to Immanuel Kant)
What the person can say about themselves:
I am (Layer -1)
I am my body (Layer 0)
I am a body with senses and abilities (Layers 1-5)
I am a body with intelligence/mind (Layer 6)
Outlook on the Next Posting
In the next posting, I will try to combine this “theory of SMS (Simple Multiuser Scenes)” – see above – with my “Small religious booklet No. 13” (see https://letztersein.com/kleine-religiose-buchlein), which is called “Models of the Reality” (written in English language).
Have a nice week
Yours Christoph
P.S.: (this text has been taken from the 12th booklet “Geist – Sinne – Körper – Seele” at https://letztersein.com/kleine-religiose-buchlein – in German – and then translated with Google Translate)
Yesterday, I had an intriguing discussion with my friend Markus S. about an old dilemma of my X3D hobby projects: which comes first, the chicken or the egg?
This dilemma has haunted me since I wrote the following email to the x3d-public mailing list in 2011:
In that e-mail I compared the “elaborated network sensor concept” of the Web3D Consortium to the egg and the “interest from the telecom industry” to the chicken.
Well, Markus corrected me:
The chicken is not only the interest from the telecom industry; it is the interest from “the industry as a whole” that counts
It was not really OK to start the S&P-ARK project on 2022-03-24; chicken and egg should “come together”. It is NOT the egg that comes first.
We should at least wait with creating the “Society for Lifelong Collaborating” until the interest can be counted in euros or dollars
Well, I told you that ALPES, the “Experimental Setup of the ALP”, is currently the only planned content of the S&P-ARK project.
We want to perform experiments with an “Application Layer Protocol (ALP)” for the Web3D Network Sensor.
Therefore we plan to implement an “Experimental Login Server Application (ELSA)”.
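To give a feeling for what such an experiment might involve, here is a minimal sketch of a login exchange; the endpoint and the message shapes are purely hypothetical assumptions, not the actual ALP or ELSA design (see the WIKI page linked below for the real architecture):

```typescript
// Purely hypothetical login exchange with an experimental login server;
// not taken from the actual ELSA design in the S&P-ARK WIKI.
interface LoginRequest  { user: string; scene: string; }
interface LoginResponse { ok: boolean; sessionId?: string; reason?: string; }

function login(user: string, scene: string): Promise<LoginResponse> {
  return new Promise((resolve, reject) => {
    const ws = new WebSocket("wss://example.org/elsa"); // assumed endpoint
    ws.onopen = () => ws.send(JSON.stringify({ user, scene } as LoginRequest));
    ws.onmessage = (ev) => resolve(JSON.parse(ev.data) as LoginResponse);
    ws.onerror = () => reject(new Error("login failed"));
  });
}
```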
Last weekend I found the time to make a little drawing of the ELSA architecture and added it to the “ELSA” page of the S&P-ARK WIKI: https://github.com/christoph-v/spark/wiki/ELSA
Here is the picture as a shortcut (without explanation):
Have a nice week
Yours Christoph
P.S.: in case of any questions, comments, improvement opportunities and so on, please feel free to add a comment. Your e-mail address will never be published.
Probably, we will call the project by its short name, ALPES, in the future.
Nevertheless, we will continue to use the S&P-ARK repository: https://github.com/christoph-v/spark, where we also felt free to “define” our vision for reference.
In my latest posting, I told you that I had finished the work on the narrative “The third child” by 2022-04-15.
Today, 2022-05-14, I can tell you: I have also updated the English translation of the narrative to version 1.7 (mostly, I removed the unnecessary “frame story” and updated the version number).