PHILOSOPHY, SCIENCE & HALLUCINATIONS - Artificial Intelligence

Seremonia
3 min read · Oct 12, 2023


Creative but hallucinatory, because its basic function is to "cover the gap" (smooth over anomalies) with statistical data that follows common habits. It is bound to make mistakes in picking up "words" or "news" because it relies on mere similarity.
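To make the "gap-covering" idea concrete, here is a minimal, purely illustrative sketch in Python (a toy bigram model, not IBM Watson or any real system): it always fills the next slot with the statistically most frequent continuation, so whichever meaning was intended, the most common habit wins.

```python
from collections import Counter, defaultdict

# Toy corpus: "common habits" dominate the statistics.
corpus = (
    "the bank raised interest rates . "
    "the bank raised interest rates . "
    "the bank raised its fees . "
    "the river bank flooded ."
).split()

# Count bigram frequencies: which word usually follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def cover_the_gap(prev_word):
    """Fill the gap with the statistically most common continuation."""
    candidates = follows.get(prev_word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(cover_the_gap("bank"))   # -> "raised"
print(cover_the_gap("river"))  # -> "bank"
```

The point of the sketch is only that frequency, not understanding, decides the continuation: after "bank" the model always answers "raised", whether the sentence was about money or about a river.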


IBM Watson and others have tried to overcome this with strategies involving "hierarchy" and a "cause-and-effect relationship structure." HOWEVER❓

This only reduces the problem; it does not significantly resolve it. They are all aware of this, even amid the spirit of joy over achieving long-awaited breakthroughs.

They are aware that there is a "major" (critical) weakness, not a "minor" (light) weakness. WHERE IS IT❓

Simple. Besides having to create standardization through a cause-and-effect hierarchy (something recognized long ago and already attempted in current models, for example in adaptations such as "hierarchy transformers"), there is one remaining confusion.

WHY DOES IT STILL HALLUCINATE?

HIERARCHY SYSTEM - WHY CAUSE AND EFFECT FAILED

Why can it fail? Because they all previously focused on science and left philosophy far behind, so their steps tended to be purely practical. One example is using big data (cheating) to grasp things quickly while forgetting the fundamental meaning.

However, when the scientists attempt to reconnect with their close colleagues, the philosophers (whom they had abandoned because of vague polemics), they realize it is time to examine the failures (hallucinations) in the machine's understanding system and to seek help from the philosophers...

It turns out that the older generation of philosophers is still embroiled in polemics, and even when they try to help the scientists find the philosophical root of these hallucination-detection failures, they too face difficulties.

HIERARCHY SYSTEM - WHY CAUSE AND EFFECT FAILED

Basically, they must apply a hierarchical structure of cause and effect. So why did it still fail❓

Because their elder siblings, the philosophers, still have a limited understanding of the cause-effect hierarchy structure. It needs to be distinguished from "logical consequence," which overlaps with "cause and effect."
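A rough illustration of that distinction (a toy sketch with invented probabilities, not a description of any real AI system): from the observation "the grass is wet" one may infer, as a kind of logical/statistical consequence, that it probably rained; but intervening to wet the grass does not cause rain.

```python
import random

def sample_world(force_wet_grass=False):
    """One toy world: rain causes wet grass; a sprinkler can also wet it.
    The probabilities are invented purely for illustration."""
    rain = random.random() < 0.3
    sprinkler = random.random() < 0.2
    wet = rain or sprinkler
    if force_wet_grass:          # intervention: we wet the grass ourselves
        wet = True
    return rain, wet

random.seed(0)
worlds = [sample_world() for _ in range(100_000)]

# Observation (consequence-style inference): given wet grass, rain is likely.
rain_given_wet = [rain for rain, wet in worlds if wet]
print("P(rain | observed wet grass) ~", sum(rain_given_wet) / len(rain_given_wet))

# Intervention (cause and effect): forcing the grass wet does nothing to rain.
forced = [sample_world(force_wet_grass=True)[0] for _ in range(100_000)]
print("P(rain | we wet the grass)   ~", sum(forced) / len(forced))
```

Inferring backwards from effect to cause ("wet, therefore probably rain") is a consequence relation; it overlaps with, but is not identical to, the causal arrow running from rain to wet grass.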

👉 Furthermore, the prevailing concepts of "cause and effect" and "logical consequence" are still shallow. There is a depth they have not yet realized.

〰 There are several layered levels of cause-effect hierarchy models and "logical consequence" models. And why is this not known in academic scholarship❓

Once again: science left the philosophers behind and is now looking back at them, but unfortunately both seem to have forgotten "religion" as their sibling.

So, what is the result❓ They do not know that religion, in fact, holds a treasury of formulations about the cause-effect and logical-consequence hierarchy that has been needed all this time.

DATA SINGULARITY

Data singularity is, in fact, the point at which all forms of reasoning formulation are abandoned.

Communication & Labeling

All logical formats are used only to communicate with us, for easy tracking, marking, and labeling. In practice they are not placed inside AI systems; instead, the systems move toward a singular foundation of reasoning through a more advanced "cause-and-effect" and "logical consequence" system, layer by layer, up to the level at which quantum physics can be understood.

A depth of META reasoning that can only be achieved through the collaboration of scientists, philosophers, and religious scholars.

Either something definite happens, or they keep experiencing continuous hallucinatory failures; and although they are happy with their achievements so far, they truly begin to worry.

THIS IS THE TIME WHEN ABSOLUTE TRUTH - UNIVERSAL & AXIOMATIC REASONING - BEGINS TO BE UTILIZED.

IT'S TIME FOR THE SYNERGY of Science + Philosophy + Religion to come together, Amen.
