Based on the provided sources, Gödelian incompleteness alters ideas about entropy primarily by introducing a fundamental limit on the calculation and definition of algorithmic entropy (Kolmogorov–Chaitin complexity). This shift moves entropy from a probabilistic measure of a system's state to a measure of the compressibility of its description, a quantity whose exact value turns out to be mathematically unprovable in general.

Here is how Gödelian incompleteness impacts the understanding of entropy:

1. The Impossibility of Calculating “True” Algorithmic Entropy

In the context of Algorithmic Information Theory (AIT), the entropy of an object (or state) is defined as the length of the shortest computer program required to generate a complete description of that object[1].
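In standard AIT notation (my rendering, not quoted from the sources), this definition reads:

```latex
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}
```

where $U$ is a fixed universal Turing machine, $p$ ranges over programs, and $|p|$ is the program's length in bits; the invariance theorem guarantees that changing the choice of $U$ shifts $K$ by at most an additive constant.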

The Gödelian Constraint: Volkenstein explains that as a consequence of Gödel’s incompleteness theorem, it is generally impossible to prove the minimality of a given program. One can never rigorously prove that a shorter program does not exist to generate the same sequence[2].

Undecidability: Consequently, the algorithmic entropy of a specific string or physical state is non-computable or undecidable[3][4]. We can find an upper bound (the length of the shortest program we have found so far), but we cannot calculate the exact value because we cannot prove a shorter algorithm is impossible[2].
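The "upper bound only" situation can be made concrete with compression: a compressed file plus its decompressor is itself a program that regenerates the data, so compressed length bounds the algorithmic entropy from above without any claim of minimality. A minimal stdlib-only sketch (the function name is mine):

```python
import os
import zlib

def entropy_upper_bound(data: bytes) -> int:
    """Compressed length is a computable UPPER bound on algorithmic
    entropy (up to an additive constant): the decompressor plus the
    compressed bytes together form one program that regenerates the
    data.  Nothing here proves minimality -- a shorter program may
    always exist, and Godel-style limits say we cannot rule it out."""
    return len(zlib.compress(data, 9))

ordered = b"ab" * 500      # 1000 bytes with an obvious regularity
noise = os.urandom(1000)   # almost certainly incompressible

print(entropy_upper_bound(ordered))  # tiny: zlib found the pattern
print(entropy_upper_bound(noise))    # near 1000: no pattern found
```

The asymmetry matters: a small result certifies low entropy, but a large result certifies nothing, since the regularity may simply be one the compressor cannot see.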

2. Redefining Randomness and Order

Gödelian incompleteness blurs the line between “random” (high entropy) and “ordered” (low entropy) systems when viewed through the lens of algorithmic complexity.

Apparent Randomness: Zurek notes that a system might appear random (high statistical entropy) yet possess a concise description (low algorithmic entropy), such as the binary expansion of π. Ideally, we would classify such a system as ordered. Gödelian undecidability, however, makes that assessment difficult: we may simply fail to discover the underlying algorithm[5].

Unprovable Randomness: It follows that one cannot prove a string is truly random. To do so, one would have to prove that no program significantly shorter than the string itself generates it, and Gödel's theorem (made precise for this setting by Chaitin) blocks such a proof for sufficiently complex strings[2].
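The π example above can be made concrete. The sketch below (stdlib only; function names are mine) generates digits of π with Machin's formula and measures their empirical Shannon entropy, which sits near the log₂ 10 ≈ 3.32-bit maximum even though the generator is a few lines long:

```python
import math
from collections import Counter

def arctan_recip(x: int, one: int) -> int:
    """one * arctan(1/x) via the Gregory series, in integer arithmetic."""
    power = one // x
    total = power
    n = 1
    while power:
        power //= x * x
        term = power // (2 * n + 1)
        total += -term if n % 2 else term
        n += 1
    return total

def pi_digits(n: int) -> str:
    """First n decimal digits of pi, via Machin's formula."""
    one = 10 ** (n + 10)  # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_recip(5, one) - arctan_recip(239, one))
    return str(pi)[:n]    # "31415926535..."

def shannon_entropy(s: str) -> float:
    """Empirical Shannon entropy of the symbol frequencies, in bits."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

digits = pi_digits(2000)
print(shannon_entropy(digits))  # near the 3.32-bit maximum: "random"
# ...yet the entire sequence is produced by the short program above.
```

Statistically the digits look maximally disordered; algorithmically they are nearly maximally ordered, which is exactly the gap Zurek describes.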

3. Impact on Physical Laws and Logic

Gödelian incompleteness implies that the deductive method in physics has limits where complexity and entropy are concerned.

Limits of Deduction: Volkenstein cites the mathematician Yu. I. Manin, noting that Gödel’s theorem implies that the deductive method is not sufficiently powerful to derive all truths, even in finite systems. This suggests that in calculating the complexity (entropy) of the world, logical derivation is inadequate for establishing the “minimal” laws or programs[6][7].

The Role of Intuition: Because logic cannot mechanically establish the minimality of a description (and thus the true entropy), Volkenstein argues that intuition becomes indispensable in scientific discovery. The “true” entropy of a system might be guessed, but not deduced[7].

4. Computational Irreducibility in Nature

Recent work discussed in the sources connects Gödelian concepts to the physical nature of Earth and biological systems.

Non-Computable Complexity: Rubin argues that the complexity of Earth’s systems involves “impredicativities” (self-referencing definitions) that are essentially non-algorithmic and cannot be simulated by Turing machines[8]. This implies that the entropy and dynamics of such self-referential systems fall outside the scope of standard computation, aligning with the limitations exposed by Gödel.

Limits of Computable Measures: Zenil and Low argue that no computable measure of complexity (including Shannon entropy) can test for all computable regularities. Therefore, standard entropy measures might misrepresent the causal likelihood of a structure because they cannot detect all forms of algorithmic regularity due to these fundamental logical limits[9][10].
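Zenil and Low's point can be illustrated with a sketch (not their code): the output of a tiny linear congruential generator scores near-maximal Shannon entropy, even though its algorithmic description is two lines. The constants are the classic Numerical Recipes pair, used here purely as an example; Shannon entropy simply cannot detect this kind of regularity.

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte (max is 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def lcg_bytes(seed: int, n: int) -> bytes:
    """Top byte of a 32-bit linear congruential generator: a stream
    with a two-line algorithmic description."""
    out, x = bytearray(), seed
    for _ in range(n):
        x = (1664525 * x + 1013904223) % 2**32
        out.append(x >> 24)
    return bytes(out)

data = lcg_bytes(42, 65536)
print(byte_entropy(data))  # close to the 8-bit maximum
```

An entropy test rates this stream as nearly incompressible noise, yet its true algorithmic entropy is a few dozen bytes; no computable measure can detect every such hidden regularity.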

Summary

Gödelian incompleteness transforms entropy from a purely statistical property into a quantity that is epistemologically bounded. It asserts that while a “true” measure of disorder (based on the shortest description) may exist in principle, it is fundamentally impossible for an observer to calculate it with certainty. We can never know for sure if a system is truly high-entropy (random) or if we simply lack the cleverness to find the short algorithm that generates it.