Elegance vs Parsimony (Philosophical Distinction)

1,898 views

Carneades.org

1 day ago

Comments: 20
@wafflepotato · a year ago
You deserve so many more subs. It's so hard to find good educational analytic philosophy content on YouTube.
@hearteyedgirl · a year ago
Just found this channel, and immediately loving it
@Pfhorrest · a year ago
On the principle of explosion, something like I said on an earlier video in this series seems to apply: making fewer assumptions effectively exposes you to lesser epistemic risk, which is the exact same thing as being more probable, in that you are less likely to turn out to be wrong. I'm a little unclear from your exposition here if the problem being put forth is that any undefended assumption effectively *is false* and having fewer such "false" assumptions isn't any better than having many of them because of the principle of explosion. That seems absurd to me, the kind of conclusion that only a greedy or radical skepticism would suggest, that everything is by default unwarranted and to be rejected until reason is found compelling its acceptance, in contrast to a critical rationalism whereby any assumption is warranted and acceptable (as in epistemically permissible, but not epistemically obligatory) until reason is found compelling its rejection. But on that critical-rationalist approach, making more assumptions exposes you to more possibility of some reasons being found to compel the rejection of some of them, so if your theory depends on a large number of assumptions all being true together, it's exposed to a lot of risk of being falsified, and so is less probable.
@88tongued · a month ago
Same
@Pfhorrest · a year ago
On separation, complexity, and the relationship between elegance and parsimony generally, it seems to me like these two things ultimately boil down to the same thing in the end, because what we're ultimately doing is building mathematical models, abstract structures, and seeing how they compare to empirical observations of the actual concrete world, so as to figure out which possible world, or which abstract structure, that actual concrete world is. In building a mathematical model we stipulate some axioms that define the kinds of entities that model will feature and how they behave. Having fewer kinds of entities with behaviors governed by fewer rules that nevertheless can interact in such ways as to produce all of the phenomena that we empirically observe is both more parsimonious in the ontological sense, and more elegant in the semantic sense, provided that the obvious cheap trick of just conjoining different axioms together and saying "they're only one, see" is forbidden: everything must be broken down to its most atomic form. Start with your foundations of mathematics, like say set theory (and even here there's potential room for optimization!), build out your numbers and tensors and spaces and fields in turn out of those sets or whatever, and then model observed empirical phenomena with as few and as simple of those as you can.
@littlebigphil · a year ago
In computer science there's a concept called Kolmogorov complexity. Given an output and a programming language, the Kolmogorov complexity is the length of the smallest program that produces that output. Every combination of output and programming language has a defined Kolmogorov complexity (although we probably don't know what it is). By analogy, if we had a fixed language (some form of logic) to model our theories in, we could express elegance as the length of the smallest model of the theory in that language. This gives the intuitive results: two premises are equivalent in elegance to a conjunctive premise, and minimizing kinds is more important than minimizing tokens. There is a clear problem though. We don't have a fixed language.
@Pfhorrest · a year ago
On natural kinds, it seems to me that the problem of different fields dividing up types or kinds differently ultimately makes for an argument for reductionism on the grounds of parsimony: if many different chemical elements can be reduced to combinations of a smaller number of subatomic particles, and those subatomic particles can in turn be reduced to combinations or other variations of a common underlying physical unity (say strings, for a proposed example), and all supposedly non-physical things can be reduced to the physical, or perhaps if e.g. the physical and the mental can be reduced to some kind of neutral monism... the more reducible everything is, the more parsimonious the theory of everything can be, so if parsimony is a theoretical virtue, reductionism is too.
@Nicoder6884 · a year ago
Can you please do more Set Theory videos?
@Pfhorrest · a year ago
On type vs token parsimony, I find I encounter many people (even professional philosophers) who seem to think it is token parsimony that matters, which seems absurd to me. Usually this is in the context of something like Lewis's Modal Realism or Tegmark's Ultimate Ensemble, where it's held that the actual or concrete world is not different in type from merely possible worlds or other abstract structures, but that it's merely the indexical one of them: the actual world is just *this* possible world, the concrete world is just *this* abstract structure, and there's no such difference in type as actuality or concreteness at all. Critics object that this is supremely un-parsimonious in that it admits of infinities of new tokens of what they reckon to be the singular actual or concrete, while proponents such as myself argue instead that these are much more parsimonious solutions that eliminate an unnecessarily posited difference in type.
@88tongued · a month ago
The ex falso quodlibet argument has two problems. Maybe both problems come down to my failure to properly understand this. But the first is that EFQ doesn't have much to do with parsimony or elegance. If anything, having a simpler theory with fewer implications reduces the chances of making a false entailment. This doesn't seem to undermine elegance as a virtue. Second is that "as long as at least something was assumed without argument, there is no reason to believe the conclusions drawn from that assumption are true" also has little to do with either elegance/parsimony or EFQ. Sure, you can't deduce axioms, but you can still give arguments for them. If anything, elegance might be taken as the reason. In some contexts you have reason, but in an ultimate sense, yes, you might have no good reason to believe anything or have any starting point to stand on, which isn't necessarily the discussion about elegance as a virtue in philosophy, and it isn't EFQ-related.
@mac2phin · a year ago
Planck length.
@InventiveHarvest · a year ago
Science does not need to assume the laws of logic, as it does not use the laws of logic. Science does need to assume the scientific method, but not because the scientific method cannot be justified. It's just that the scientific method cannot be justified by science. There is no experiment that could test the scientific method. To justify the scientific method is a burden of philosophy, as Popper attempted. So, science has to assume the scientific method because it cannot be justified within the system. For the same reason, science has to assume math. There is no experiment to reveal 2+2=4. Now, math does assume the laws of logic, but science does not need to, because math is an assumption in the science system. Assumptions do not need to be justified. Ceteris paribus, a more elegant solution is better than a less elegant solution, and a more parsimonious solution is better than a less parsimonious solution. But how would we weight value between the two? Certainly we would weight both less than predictive power and coherence. But because assumptions are, as described above, often useful conveniences so that we don't have to include other systems, parsimony should be weighted over elegance.
@littlebigphil · a year ago
We should be careful with "assumes". The scientific method is an algorithm, so it is a way of acting. There's also the claim that the scientific method is epistemic justification: that if one wants to believe what is true, then one should believe what is produced by the scientific method. To say that science "assumes" something means what exactly? That a proclivity to act according to the scientific method is necessarily rooted in a proclivity to act as if that assumption is true? That the scientific method's epistemic justification is necessarily rooted in this assumption also being epistemic justification?

By the proclivity interpretation, if science "assumes" math, then it also "assumes" logic. By the justification interpretation, science "assumes" nothing. Logic (and math) can be scientifically tested. Assume it is true. Try to derive a contradiction. If you succeed, then you have falsified it.

The scientific method can even test itself. Create some set of facts to be discovered. Get two groups. Get one of the groups to follow the scientific method. Get the other group to follow whatever other method you want to test it against. Watch the groups try to discover that set of facts. If you find that the scientific method was less effective than the other method, then you have found evidence towards the falsification of the scientific method.

One could argue that there is philosophical reason to root science's epistemic justification in logic's epistemic justification, but this doesn't mean that it's necessary to.
@InventiveHarvest · a year ago
@@littlebigphil using the scientific method to test the scientific method would be circular reasoning.
@littlebigphil · a year ago
​@@InventiveHarvest There's nothing obviously wrong with that. Is it better to unjustifiably assume a justification or to self-justify it? You'll have to ask the same question for logic anyway if you assume that logic is the foundation.
@dimkk605 · a year ago
@@littlebigphil Why do we always need to test a method? We already know empirically that some things, phenomena, methods, theories, etc. are true without even testing them. Without even the need to come up with a way to test them in order to prove them true or false. This hypothesis-testing mania is a Western-science mania that is too limiting in so many ways, in my opinion. How can you test the hypothesis that there is a sun? How can you test the hypothesis (assumption) that the sun has mass and that its mass can be measured? You don't need to test it. You believe it a priori. We all believe there is a sun and that the sun has mass. It's not a theorem. It's not an axiom. It's not a hypothesis. It's barely empirical data!! Yet, we tend to believe lots of things without the need to test them first. This principle must be used more frequently. It sounds dangerous, but maybe it can fix lots of problems. We just need to try to believe in random things and see how it goes....
@dimkk605 · a year ago
@@littlebigphil The very scientific method itself, the one we use today, is self-limiting. Self-referential too. So someone would argue that there is no point trying to prove something scientifically; if it's stuck in a self-referential loop, then all scientific proofs aren't proving anything. So, I guess we are back to empiricism or even... solipsism. Or maybe not.....
@TressaDanvers · a year ago
I feel like in this case, I disagree with your assessment of these two kinds of simplicity. I don't disagree that the strongest versions of these ideas are too strong, but I feel like the weaker ones have merit.

Elegance: Tackling this one first, because imo it's easier to formulate what I mean. The problem of separation at first sounds reasonable. We can, in fact, split any statement up into many different parts, arbitrarily even, because of the way statements work. However, I don't think that's necessarily what people mean when they say splitting for elegance is important. Let's take that example you have: (∀x∈X, (x∈Y ∧ x∈Z)). This ofc can be split into (∀x∈X, x∈Y) and (∀x∈X, x∈Z). We split along the "and" to make each statement semantically smaller. These two statements, on their own, say less than the original statement. With this, we could then split further by the same justification. If we posit a subset W⊂X, then the two statements (∀x∈W, x∈Y) and (∀x∈(X∖W), x∈Y) are each semantically less complex than the original statement (∀x∈X, x∈Y), because they "say less things individually than the original". But this isn't the kind of split that I think I would make, and my justification isn't semantics, it's syntax. I think a split should only occur if the resulting two statements are both semantically and syntactically simpler.

The syntax tree for the original statement is this:

        ∀
       / \
      ∈   ∧
     / |  / \
    x  X ∈   ∈
        / |  | \
       x  Y  x  Z

Then, it is split into two statements which are both semantically simpler and syntactically simpler:

      ∀              ∀
     / \            / \
    ∈   ∈          ∈   ∈
   / | | \        / | | \
  x  X x  Y      x  X x  Z

But if we try that split with the subset, like you proposed, then

      ∀
     / \
    ∈   ∈
   / | | \
  x  X x  Y

becomes:

      ∀              ∀
     / \            / \
    ∈   ∈          ∈   ∈
   / | | \        / | | \
  x  W x  Y      x  ∖  x  Y
                   / \
                  X   W

(∖ being the set-minus operator; for some reason YouTube's font prints it to look more like the backslash character than it is supposed to.)
You can see how, even though semantically these two trees are smaller, because they each individually make claims about fewer things than the original, the second one is syntactically more complex.

Now, this can become uneasy as well, because this is the language I chose, and a different language could express the same concept in a simpler syntax, and that comes down to the individual power of each of those operators. I'd say that if you invented a way of combining some of those operators to make the syntax tree "smaller", then you've not actually made it less complex, because you've increased the semantic complexity of the operators in question. It's like using fewer words and saying it's less complex, but the words you chose were large jargon terms with lots of complexity behind them. So I'd say that if you find a simpler syntax to explain the same concept, then each individual node of the new tree should be compared with the old tree to see if it is actually the same complexity.

How would you actually do that comparison? Idk lol, that's the actual problem with this I think; unlike something like a computer programming language, the meaning of things in natural languages isn't always perfectly defined. Ambiguity and skepticism are core features of every single known natural language (no citation, can't remember where I saw that, quote at your own risk). I guess what I'm saying is that if you want to have this idea of elegance, you must first decide on a language or group of languages. Only allow those syntagma to count and not add more to the pile. One could pick any number of languages to include in this, and some are probably going to be "better" at it than others. Then the actual minimal syntax trees, which also convey the least semantic meaning individually, could be a real measurable heuristic.

Parsimony: Now, let's talk about kinds. Or really, since I'm cse at heart, let's talk about Types.
In computer science there is this concept of a "type system", and most of its ideas are taken from (or given back to) the mathematical frameworks called type theory and category theory (different theories; both have ideas used in type systems in computer science).

First, let's talk about the difference between a concrete and an abstract type. A concrete type is one of which an actual instance can exist, that is to say, in your words, that it is a kind which could have one or more tokens. An abstract type is one which cannot instantiate, that is to say that there are no tokens of this if it were called a kind. This does not mean that an instance of a concrete type cannot also be "called" by the name of an abstract type. For example: Animal and Dog. Animal is an abstract type, and Dog is (in this example; I'll get back to this) a concrete type. Every Dog "is" an Animal. That is to say, every single instance of the Dog type can be "called" an Animal. But is there any example of an Animal which is not an instance of anything except Animal? There are plenty of Animals which are not Dogs, possibly Cats, or Birds, but is there any instance of "Animal" which is purely Animal and not any other thing? No. This makes Animal an abstract type; it only exists as an idea.

Now, is Dog actually a concrete type? Maybe, maybe not. That is essentially the question which underlies parsimony. Given two theories which predict the same thing, the theory which posits the fewest concrete types is preferable..... or at least it would be, but there is the problem of composition we have to tackle first. This problem is actually a lot like the problem which occurred with elegance, which was solved (assuming a fixed set of syntagma) by introducing the extra criterion of syntactic simplicity. The solution for this has actually been encountered by computer scientists, and it's called in that field the "principle of least responsibility". First though, let's formulate the problem better.
Given two concrete types A and B, which are both used in a theory to explain something, you can compose those two types into a new type C which is the join of the two. For example: if we're assuming that all things are either protons, neutrons, or electrons (bad assumption but good example), then you *could* combine the Proton type with the Neutron type like so (using kotlin-adjacent syntax for pseudocode):

type Proton(position, velocity, energy)  // Ignoring Heisenberg, too complex for this example.
type Neutron(position, velocity, energy)

These two types could be combined into this new type:

type Nucleon(position, velocity, energy, charge)

The charge value can be used to distinguish between the two types, and that's enough to fully simulate either a proton or a neutron (in a naïve way) in a single type. However, this conflicts with the principle of least responsibility. Now this type is responsible for accounting for charge, as well as the things it accounted for before. Sometimes, computer scientists decide that this change actually is a good thing, even if it violates the principle of least responsibility. However, this is (most of the time; sometimes it's just dumb) for the purpose of including extra features or other such reasons (which is equivalent to increasing the explanatory power of a theory).

NOW, this isn't to say that there aren't any problems with parsimony, elegance, or even the solutions outlined here. I'm just saying that parsimony and elegance are at least *actually measurable and quantifiable* depending on how you define them, and if you're careful with your definitions you can produce results along the lines of what people actually intend when they invoke Ockham's Razor.
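The node-counting idea in the elegance half of this comment can be made concrete; here's a small sketch (the tuple encoding and the count function are my own construction, not from the comment) showing that the conjunction split shrinks the trees while the subset split grows one of them:

```python
def size(tree):
    """Number of nodes in a syntax tree written as (operator, *children);
    a bare string is a leaf symbol such as 'x' or 'X'."""
    if isinstance(tree, tuple):
        return 1 + sum(size(child) for child in tree[1:])
    return 1

# ∀x∈X, (x∈Y ∧ x∈Z)
original = ("∀", ("∈", "x", "X"), ("∧", ("∈", "x", "Y"), ("∈", "x", "Z")))
# Split along the conjunction: ∀x∈X, x∈Y  and  ∀x∈X, x∈Z
split_a = ("∀", ("∈", "x", "X"), ("∈", "x", "Y"))
split_b = ("∀", ("∈", "x", "X"), ("∈", "x", "Z"))
# Split via a posited subset W⊂X: ∀x∈W, x∈Y  and  ∀x∈(X∖W), x∈Y
subset_a = ("∀", ("∈", "x", "W"), ("∈", "x", "Y"))
subset_b = ("∀", ("∈", "x", ("∖", "X", "W")), ("∈", "x", "Y"))

print(size(original))                  # 11
print(size(split_a), size(split_b))    # 7 7 — each strictly simpler
print(size(subset_a), size(subset_b))  # 7 9 — the second grew past split_a
```

On this heuristic the conjunction split is admissible (both halves are syntactically simpler than the original) while the subset split is not, which matches the distinction the comment draws.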
@TressaDanvers · a year ago
The formatting on some of the trees is borked; YouTube's comment editor clearly isn't meant for that lol. Still legible I hope.
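For what it's worth, the kotlin-adjacent Proton/Neutron/Nucleon pseudocode in the parent comment can also be run as real code; this Python stand-in (field types and the charge encoding are my assumptions, only the type names come from the comment) shows the join of two concrete types and the extra responsibility it takes on:

```python
from dataclasses import dataclass

@dataclass
class Proton:  # ignoring Heisenberg, as in the original pseudocode
    position: float
    velocity: float
    energy: float

@dataclass
class Neutron:
    position: float
    velocity: float
    energy: float

@dataclass
class Nucleon:
    # The join of Proton and Neutron: one fewer concrete type in the
    # "theory", but this type is now also responsible for tracking
    # charge, which neither original type had to do -- the tension
    # with the principle of least responsibility.
    position: float
    velocity: float
    energy: float
    charge: int  # +1 plays the role of a proton, 0 of a neutron

p = Nucleon(position=0.0, velocity=0.0, energy=1.0, charge=+1)
n = Nucleon(position=0.0, velocity=0.0, energy=1.0, charge=0)
print(p.charge != n.charge)  # True: charge alone distinguishes the two roles
```

Whether the merged Nucleon counts as more parsimonious (one type instead of two) or less elegant (an extra field and an extra responsibility) is exactly the trade-off the comment is pointing at.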