peicaili|Jul 10, 2025 23:48
Weekly Journal 0711

This week's thinking revolved around a few questions. The first is how to tell whether an explanation is a circular argument. I first ran into this topic while reading Popper. Early on, Popper suspected the theory of evolution of circular reasoning: when we use natural selection and survival of the fittest to explain the living world as it is, there is a flavor of "whatever survives is fit, whatever fails to survive is unfit", which can explain anything and substitutes outcomes for reasons. I first really registered the problem when ChatGPT used the Coase theorem to explain a historical event to me. At first I was quite pleased; it seemed like a genuinely good perspective, with a hint of first principles about it. But after a few rounds of questioning I realized it was not that simple, and soon saw the circularity: when a social policy has a positive effect, we say it lowered transaction costs, and when it fails, we say the opposite.

The next step was to explore how to solve this problem. I asked ChatGPT first, and it admitted the problem is not easy to fix. Then I looked at how Popper later changed his attitude toward evolution, which gave some inspiration but still left things unclear. The general direction, though, is to lift the model out of specific examples and spell out its logical chain and the boundaries within which it applies. Both should meet the criteria of a good explanation: hard to vary and falsifiable.

Another question is one Haibo raised in the group: how do you recognize whether an explanation you encounter is a good one or a bad one? This is related to the previous question, since circular reasoning is a typical mark of a bad explanation. Haibo's idea is quite inspiring: we first have to turn our point-like knowledge into a knowledge network, so that when an outside claim conflicts with our own network, we notice that it is a bad explanation. So how does one weave knowledge points into a knowledge network?

Another insight from these past few days is the creative principle Tang Zhi gives in The Expert's Black Box: when building abstract frameworks, use novel assumptions to overturn classic models, which creates conflict and pushes us to ask good questions; when answering concrete questions, connect classic models with a distinctive perspective, which helps clear up confusion and identify good answers. This principle seems quite useful.

The first half is better suited to truly understanding a knowledge point (many of which are classic models, such as the Coase theorem). There are two ways to go about it:
1. Apply a classic model to a specific case, especially a problem you find difficult. For example, when I tried to use the Coase theorem to analyze the centralized drug procurement policy, it produced plenty of confusion and conflict, which helps clarify the model's applicable conditions and boundaries.
2. Apply two classic models, especially two that conflict somewhat, to the same case. This also creates confusion and helps clarify each model's applicable conditions and boundaries. For example, these past few days I have been trying to work through one example with both the Bayesian model and the reflexivity model.
For example, when an investment strategy makes us money, should we follow the Bayesian model, give the strategy more weight and put more money into it, or should we follow the reflexivity model and put less money into it? (A concrete case: Lijia gave up the meme-coin strategy after making money on Trump coins; in hindsight, that was undoubtedly the right call.) A rough numerical sketch of this tension follows at the end of this entry. After repeatedly tinkering with classic models like these, you feel your knowledge connecting from points into a network. I think sorting out classic models that are somewhat contradictory or in tension with each other is the most important part of building a knowledge network.

The second half of the principle helps us judge whether an explanation someone else throws out is a good one. If an explanation does not connect to any classic model, it is usually a bare assertion or nonsense (we can assume that an ordinary person is not able to propose a new classic model, and it is also hard to find a problem no predecessor has thought about; this is the same point Deutsch makes about bad explanations being easy to construct and good explanations being hard). And if we have hands-on experience tinkering with the classic model in question, we can tell clearly whether it is being used within the right boundary conditions.
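On the Bayesian-versus-reflexivity question above, here is a minimal sketch, assuming made-up numbers: the prior, the win probabilities, and the crowding factor are all illustrative assumptions, not anything from this journal. A Bayes-rule update reads a win as evidence that the strategy has edge and raises the estimate; a reflexivity-style adjustment then discounts it, on the idea that the visible win itself attracts imitators and changes the game.

```python
def bayesian_update(prior_edge: float, p_win_if_edge: float, p_win_if_no_edge: float) -> float:
    """Posterior P(strategy has edge | we just won), by Bayes' rule."""
    numerator = p_win_if_edge * prior_edge
    evidence = numerator + p_win_if_no_edge * (1 - prior_edge)
    return numerator / evidence

def reflexive_discount(posterior_edge: float, crowding: float) -> float:
    """Crude reflexivity adjustment: discount the estimate by how much
    the visible win invites imitation and erodes future edge (0..1)."""
    return posterior_edge * (1 - crowding)

prior = 0.30  # assumed prior that the meme strategy has real edge
posterior = bayesian_update(prior, p_win_if_edge=0.6, p_win_if_no_edge=0.3)
print(f"Bayesian posterior after a win: {posterior:.2f}")  # ~0.46 -> bet more

adjusted = reflexive_discount(posterior, crowding=0.5)  # assumed crowding effect
print(f"Reflexivity-adjusted estimate:  {adjusted:.2f}")  # ~0.23 -> bet less
```

The point is not the particular numbers but that the same observation, a win, pushes the position size in opposite directions depending on which model you connect it to.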