The SENTINEL system used in QC+ is the same system used to analyze sentences in META Persona, developed further.
It has been extended to operate over a three-dimensional Cartesian space, allowing it to detect causal relationships, intensity, and the main points of a discussion.
This doesn't involve any known AI system.
The advantage is its ability to detect gaps and determine substitutions.
IF STRENGTHENED WITH A POWERFUL SERVER SYSTEM OR SUPERCOMPUTER, IT CAN DETECT CONTRADICTION NODES AND THEIR PARADOXICAL NODES.
At its deepest level, this system behaves much like human reasoning.
A simple example is shown below:
SENTINEL AXIOMATIC REASONING
If the sky is dark [yes], then light rain falls [half], and there is no confusion [no]
QC+:
66.66666666666666%
At another time, it goes like this...
If the sky is not too dark [half], but there's no rain [no], it causes confusion [yes]
QC+:
41.66666666666667%
It is concluded that... confusion arises not from the rain itself but from the ambiguity of the situation (41%, not close to 66%)
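One weighting that reproduces both percentages above is a positional 3:2:1 weighting over the three clauses, with yes = 1, half = 0.5, and no = 0. This mapping is reverse-engineered from the two numbers, not a documented QC+ formula, so treat it as a sketch:

```python
# Hedged sketch of QC+-style clause scoring. The label values (yes=1,
# half=0.5, no=0) and the 3:2:1 positional weights are assumptions
# inferred from the two percentages above, not an official SENTINEL spec.
LABELS = {"yes": 1.0, "half": 0.5, "no": 0.0}

def qc_plus(labels, weights=(3, 2, 1)):
    """Weighted average of clause labels, returned as a percentage."""
    total = sum(w * LABELS[l] for w, l in zip(weights, labels))
    return 100 * total / sum(weights)

print(qc_plus(["yes", "half", "no"]))  # dark sky, light rain, no confusion -> ~66.67
print(qc_plus(["half", "no", "yes"]))  # dim sky, no rain, confusion -> ~41.67
```

Under this assumed weighting, the first clause dominates the score, which is why the two examples land so far apart even though both contain one yes, one half, and one no.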
SIGNAL WEIGHTING SIMULATION
Why Do We Need to Learn Reasoning When We Can Think Already❓
Imagine that as we learn and observe, meanings carrying similar signal strength create an impression of shared context even when they are far apart. They overlap. Today's AI mimics this signal weighting, but its weighting is imbalanced. Where does the imbalance lie?
The imbalance lies in weighting labels by statistics rather than by complete cause-and-effect relationships, which leads to hallucination.
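The point can be made concrete with a toy example (the word pairs and counts below are made up for illustration, not taken from SENTINEL): pure co-occurrence statistics can weight a non-causal link above a causal one.

```python
# Hedged illustration: weighting a link by raw co-occurrence frequency
# cannot distinguish causation from coincidence. All counts are invented.
from collections import Counter

pairs = [("rain", "wet"), ("rain", "wet"), ("rain", "umbrella"),
         ("rooster", "sunrise"), ("rooster", "sunrise"), ("rooster", "sunrise")]

weights = Counter(pairs)

# Statistically, "rooster -> sunrise" (count 3) outweighs "rain -> wet"
# (count 2), yet only the latter is causal. Frequency-based weighting
# alone would confidently assert the wrong relationship.
print(weights[("rooster", "sunrise")], weights[("rain", "wet")])  # 3 2
```

A frequency-weighted model asserting "rooster causes sunrise" is exactly the kind of statistically grounded but causally empty output the text calls hallucination.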
〰〰〰
ADAPTATION vs. GENERALIZATION
IN SHORT...
AI NOW... gathers data to capture frequently used rules, accepting unexamined assumptions along the way, which leads to inconsistency.
In QC+, data is collected not to learn patterns that narrow or broaden as more data arrives, but to find strong boundaries.
In simpler terms, AI today adapts and gets trapped following the data. As the data changes, so do its answers:
With Sentinel, more data converges towards generalization.
So adaptation is fine, but don't get lost in fluctuations.
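The contrast between chasing fluctuations and converging on boundaries can be sketched with two toy estimators (this is an illustration of the idea, not SENTINEL code): a running mean shifts with every new sample, while a simple min/max boundary only moves when a sample falls outside everything already seen.

```python
# Hedged illustration (not SENTINEL code): an adaptive statistic such as
# a running mean keeps drifting with every new sample, while a boundary
# estimate (here, a plain min/max interval) only moves when a sample
# falls outside what has already been observed -- it stabilizes as data grows.
def running_mean(samples):
    means, total = [], 0.0
    for i, x in enumerate(samples, 1):
        total += x
        means.append(total / i)
    return means

def boundary(samples):
    lo = hi = samples[0]
    for x in samples[1:]:
        lo, hi = min(lo, x), max(hi, x)
    return lo, hi

data = [0.4, 0.9, 0.1, 0.6, 0.5, 0.2, 0.8]
print(running_mean(data)[-3:])  # still drifting with each new sample
print(boundary(data))           # stable once the extremes have been seen
```

The mean keeps adapting to every fluctuation; the boundary generalizes, in the sense that additional in-range data no longer changes the answer.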
WEIGHTING DIMENSIONS
The issue is that weighting in today's AI is limited and manual, not automatic.
Yet signal strength (like a beacon's) isn't just "no / doubtful / half / yes" and so on.
We could have millions of graded ambiguities (somewhat, fairly, half, almost, similar, a bit more, nearly there, etc.) formed automatically from experience, especially in times of trauma, when signals become stronger and arrive almost instantly, within seconds.
Jumping in Thought
In META Sentinel, as all intensities are measured, jumps in signals (joy, anxiety, panic, trauma) are detected directly in certain word nodes representing meanings, enabling leaps in generalization and spontaneity.
Hence, a computer analyzing chess strategy must evaluate every possible move in detail.
But a human can decide suddenly, because unconscious neural detection occurs, sensing conflicts that manifest as strong emotions (impulses).
Not to mention interruptions from dimensions beyond the nerves (intuition), which influence the nerves and cause them to send signals to the relevant nodes.
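The "jump" detection described above can be sketched as a threshold on how fast intensity rises between consecutive word nodes. The threshold value and the sample sequence are illustrative assumptions, not META Sentinel internals:

```python
# Hedged sketch: detecting intensity "jumps" at word nodes. A jump is
# flagged when intensity rises faster than a threshold between consecutive
# nodes. The threshold and the sample values below are invented.
def detect_jumps(nodes, threshold=0.4):
    """Return (word, delta) pairs where intensity spiked past the threshold."""
    jumps = []
    for (_, prev), (word, cur) in zip(nodes, nodes[1:]):
        delta = cur - prev
        if delta > threshold:
            jumps.append((word, round(delta, 2)))
    return jumps

sequence = [("sky", 0.2), ("dark", 0.3), ("thunder", 0.9), ("calm", 0.4)]
print(detect_jumps(sequence))  # [('thunder', 0.6)]
```

Flagging the spike at "thunder" without scoring every node in depth is the cheap shortcut the text contrasts with exhaustive chess-style search: the jump itself carries the signal.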
In short...
AI NOW HAS LIMITATIONS IN UNDERSTANDING DELICATE CONTEXTUAL BOUNDARIES.