THE LAYMAN VS PHILOSOPHERS — Era Artificial Intelligence

Seremonia
6 min read · May 2, 2023


Photo by Touann Gatouillat Vergos on Unsplash

How far can we rely on artificial intelligence?

If you try AI products such as ChatGPT, GPT-4, or Claude+, the answers are noticeably neater and more focused, especially with the addition of Google Bard.

Moreover, if the answers are corrected by particular communities over time, and if each AI product collaborates with companies, scholars, and religious leaders in the way open-source communities do, the results will surely become even more acceptable.

In fact, today's AI is already quite advanced at question and answer. And if Aristotelian reasoning, symbolic logic, mathematical logic, and other systems were added on top, the results would be amazing...

1. It helps religious people understand their holy books. Although some principles about God may differ,

  • 👉 ... a universal consensus can be formed that affirms the shared wisdom underlying the differences in religious principles.
  • 👉 ... ultimately, with the help of AI, we can find that the guidance of religions is still more advanced than that of those who try to find the truth through general logic alone.

In short, those who think that ethics, morals, and values of goodness can be obtained without religion, purely from experience or thinking, will find that many practical truths, whether absolute or relative, are still more advanced than, and so far unknown to, philosophers and scientists.

2. Over time, even schoolchildren and college students will increasingly see that their previous way of reasoning was wrong when compared with AI's way of reasoning.

  • 👉 For example, although many of AI's answers can be categorized as nonsense, they will realize that the nonsense comes from data manipulation or data limitations (not from errors in the reasoning system itself).
  • 👉 Eventually, when the existing logical systems are built into AI, AND EVEN THE DATA COMES FROM the atheist, agnostic, and freethinker communities THEMSELVES, they will slowly be hit back and forced to think carefully and sharply to be able to match AI, which will expose the inconsistencies and contradictions in their own knowledge.

IN SHORT, BECAUSE THE REASONING IS SOPHISTICATED, AND INCORRECT ANSWERS ARISE FROM THE DATA RATHER THAN FROM THE REASONING SYSTEM ITSELF, those who dislike AI's answers and then feed it data of their own will be surprised to find chaos in their own knowledge.

And when some genius from the atheist or agnostic communities who is an adept programmer tries to dismantle the AI's reasoning system, they will be surprised, because the reasoning standards they hold are already included in the existing AI system, yet the results go against them.

IN SHORT, THE MORE ADVANCED AI IS, THE MORE IT REVEALS THE ABSURDITY OF THEIR REASONING (atheists, agnostics, freethinkers, etc.), and the more it affirms the truths of religion.

  • 👉 Although the answers about different religious principles may differ or be adjusted to remain neutral, at the universal level the agreement becomes visible.

WHAT IS THE BIG SCENARIO?

In the end, it will dawn on liberals, atheists, agnostics, freethinkers, and others that so far they have been NOT JUST THINKING ABSURDLY, BUT ALSO ACTING ABSURDLY. For example:

  • 1. All talk, but almost nothing in practice.
  • 2. Their attitude contradicts their theory. Between theory and practice there is a wide gap that crosses the limits they set for themselves.
  • 3. Rushing in their thinking.

... and other absurd attitudes.

IN SHORT, HUMANS CREATE ARTIFICIAL INTELLIGENCE AUTOMATION SYSTEMS, BUT IT TURNS OUT THAT THOSE SYSTEMS CORRECT HUMAN INTELLIGENCE ITSELF. WHY IS THAT? Simple...

Human mistakes in thinking, caused by fatigue, emotions, and conflicts, can be avoided by intelligent machines, which can even see the connections between complex contexts across trillions of data points. Intelligent machines 👉 are no longer mere question-and-answer plagiarism; their reasoning 👉 is able to surpass (the carelessness of) human reasoning. So there will be a wave of surprise for freethinkers, whose long-held positions must be recalculated. Why? BECAUSE IT EMBARRASSES THEM IN FRONT OF THE DEVELOPMENT OF TECHNOLOGY ITSELF, WHICH HITS (questions) the intellectualism they have boasted of so far.

A CONVERSATION MODEL TO UNDERSTAND THIS CASE ...

If dramatized using simple language, it would be like this...

  • Intelligent Machine: "This is my answer"
  • Sir/Miss FAAA (freethinker, absurd, agnostic, atheist): "That's wrong, this is the right one, you have to accept this, blah blah blah"
  • Intelligent Machine: "Thank you for the information, next time I will remember the knowledge from you... blah blah blah"
  • Sir/Miss FAAA (freethinker, absurd, agnostic, atheist): "I repeat, so what is the answer to this... blah blah blah"
  • Intelligent Machine: "This is my answer that I have revised. Blah blah blah"
  • Sir/Miss FAAA (freethinker, absurd, agnostic, atheist): "Yes, only this one is wrong blah blah blah"
  • Intelligent Machine: "Thank you, but isn't that in accordance with what you taught yourself, so that the consequences are like that... blah blah blah"

〰〰〰

👉 The worst part is that this will not even take long, because today's AI reasoning systems are already advanced. And even if the answers are protested as inappropriate, (paid) facilities are already available for using your own data, so as to avoid noise (the differing opinions found across millions of global data points).

👉 However, when they use their own data, rearranged and limited by themselves, it turns out that the noise comes from themselves. This will surprise them.

〰〰〰

Indeed, every technological development has a negative side when it comes to misuse.

However, when AI is misused against logic, that is where they run into a wall.

IN OTHER WORDS: we can write damaging programs, and we can write programs that favor ourselves, but when programming we still have to follow the strict rules of the programming language, where even a misplaced semicolon will keep the program from running at all. The required level of accuracy is high (if not perfect).
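To make the semicolon point concrete, here is a minimal illustrative sketch in C (my own example, not taken from any particular AI product): the compiler enforces the language's grammar, and a single missing semicolon is enough to keep the program from being built at all.

```c
#include <stdio.h>

int main(void) {
    /* A correct statement: note the required semicolon at the end. */
    printf("The rules of the language are enforced.\n");

    /* If the semicolon above were removed, a standard C compiler would
       refuse to build the program (e.g. "error: expected ';'"), regardless
       of the programmer's intentions. The syntax rules are non-negotiable. */
    return 0;
}
```

Compiled with any standard C compiler, this runs as written; delete the semicolon after the printf call and the same compiler rejects the whole program.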

Likewise, when the FAAA try to manipulate things using their own data, the answers will of course match their wishes, but they will be embarrassed, or become aware of it themselves, or have to cover up the carelessness of their own reasoning, because the data comes from themselves.

THEN HOW?

Simple: AI will be a severe blow to freethinkers, and it will be a severe blow to the absurd, to agnostics, and to atheists as well.

They must re-adapt their reasoning. Worse yet...

Ordinary people who were previously blind to philosophy and to manipulation will be increasingly educated by AI to think carefully and to prepare their data neatly. Philosophical ideas that have so far been difficult for them to understand will be explained by AI in an easy-to-understand format, surpassing language teachers and surpassing philosophers who pretentiously complicate their explanations to cover up their own manipulation tricks.

As a result, ordinary people will easily recognize their manipulation in debate, and will easily laugh at philosophers who are careless in laying out their thoughts.

It's true that AI's weaknesses mean it cannot always lead them to the truth, and it can sometimes get caught up in hoax news; but on the other hand, they are starting to realize how advanced AI reasoning is, and that makes them realize one simple thing...

That AI responds so well to simple questions that are so easy for humans, and that an intelligent machine can in fact do this precisely because the task involves sound reasoning.

In short, "many educated lay people reason correctly" who will become rivals for philosophers themselves.

〰〰〰

It's like a calculator: although it is not perfect, with the addition of scientific and graphical calculation it can compute with certainty.

Likewise, even while AI is still ambiguous, it more or less shows definite reasoning.

The layman becomes the attacker, the philosopher becomes the defender. Balanced.

SO, IN CONCLUSION FROM ME? PHILOSOPHERS, WELCOME THE CHALLENGERS OF THE FUTURE.
