[These subjects are treated in Modes of the Finite, Part 1, Section 1, Chapter 5 and Part 4, Section 4, Chapter 2.]
4.1. The kinds of certainty
I mentioned in Chapter 1 that there were various kinds of certainty, and said that we would discuss them.
Now that we have the basic principles of thought, that discussion can be somewhat more intelligent than otherwise.
First note that certainty is not opposed to probability, but to doubt.
It is true that, when we say that something "probably" will happen, we mean that we are not certain that it will happen. But when we speak of "probability," we are not using the word in this sense, exactly. "Probability" refers to the laws of probability (the "laws of chance"), and these laws are known with certainty.
We will discuss probability later. The point here is that the fact that something has a finite probability causes a doubt as to the occurrence of that something; and therefore there is a connection between probability and doubt. But the certainty is what is opposed to the doubt, not the probability.
Since certainty, like doubt, is a state of mind, then there are basically the two kinds of certainty we mentioned earlier.
Subjective certainty, you recall, is "pig-headedness." It is not real certainty, but pseudo-certainty: an emotional state of mind masquerading as knowledge. It is, as I said, "feeling confident" of being correct, and not "worrying" about being mistaken. But just as doubt is not an emotion, so the emotion of "conviction" is not any indication that in fact you are not mistaken.
So subjective certainty is not real certainty, because it lacks evidence; we should therefore ignore it.
Depending on the kind of evidence a person has, there are various levels of objective certainty.
DEFINITION: A person is ABSOLUTELY CERTAIN when his evidence establishes that it is impossible for him to be mistaken.
DEFINITION: A person is PHYSICALLY CERTAIN when he has evidence supporting what he thinks is true and NO evidence to think that it is false.
DEFINITION: A person is MORALLY CERTAIN when he merely has NO EVIDENCE that indicates that he might be mistaken.
These are all levels of certainty, because in fact the person does not think he is mistaken; but he has stronger or less strong reasons for thinking that he is not mistaken (and in every case, no reason for thinking that in fact he might be).
We have already seen absolute certainty. In cases of absolute certainty, you can show that it would be a contradiction if the statement you think is true were to turn out to be false. In that case, you know you can't be wrong.
My definition of "physical certainty" is somewhat different from the traditional one. Traditionally, one is "physically certain" that a prediction based on the laws of nature will take place: for instance, that this sample of hydrogen I have will in fact combine with this sample of oxygen to form water. (The law itself is supposed, according to the tradition, to be absolutely certain.)
What was behind this traditional view is that God could "suspend" the laws of nature by a miracle if he wanted; and so it is possible that this sample might not in fact do what you expect. But you have no reason to think a miracle is going to happen in this case, and so you are certain of the outcome.
But that means, basically, that you are "physically" certain, according to the tradition, when, though theoretically you could be mistaken (because of the miracle), you know that in fact you aren't. The people of the Middle Ages were rather more confident than I (or almost any modern) that once you discovered a Law of Nature, it was impossible for you to be mistaken about it.
In any case, in updating the medieval notion, I chose to keep its "theoretically you could be wrong, but in practice you know you aren't" aspect. In physical certainty, then, you have evidence to support what you think is true (so that you don't just have subjective certainty); and you have no evidence which would indicate that you are mistaken. In fact, then, you have no doubt; you know what is the case. Why would you doubt if (a) you had no reason to doubt and also (b) you had a reason for not doubting?
Now of course, there might be evidence that you are mistaken, and your evidence to support your knowledge might be faulty; so physical certainty admits the theoretical possibility that you could be wrong; but this does not establish any reason for thinking that you are wrong; and hence there is no doubt as to what the fact is.
So, for instance, we do not doubt that we are awake when we are awake, even though we realize that when we are sleeping, we sometimes dream that we are awake. So there is the theoretical possibility that you might now be dreaming that you are awake. But in fact, waking knowledge is a different sort of experience from a dream, and when you are awake, it is self-evident that you are not dreaming. So the theoretical possibility does not actually cause a doubt as to the fact that you are now awake.
When a defendant in a criminal trial has to be proved "guilty beyond a reasonable doubt," the certainty the jury is to have of his guilt is physical certainty. That is, there has to be (a) no evidence (no "reasonable" doubt) that he is innocent; but (b) more than that, there has to be positive evidence that he is guilty.
Moral certainty is the weakest of the three types. Again, you do not doubt what you think is the case; but here, your lack of doubt does not have anything in particular positive to support it; it is simply that you have no evidence that would indicate that you are wrong.
There might well be such evidence, and so once again you might be wrong; and it is easier for you to be mistaken in this case than in the case of physical certainty, because you have no evidence that would establish that you are not mistaken. Thus, you could have a doubt as to whether you were mistaken or not, but it would not be a reasonable doubt.
4.1.1. Certainty and evidence
But there is evidence and evidence, isn't there?
Suppose a defendant's brother gets on the stand and testifies that the defendant is a man of good moral character, and that he wouldn't have embezzled all that money. He has "given evidence," and so isn't that evidence that the defendant is innocent--and therefore, how could he be proven guilty "beyond a reasonable doubt"?
But no reasonable juror would accept this testimony as evidence, because (a) even if the defendant had the moral character his brother said he had, temptation can make even moral people sometimes act against their character; (b) the brother might not know everything about the defendant, and is simply basing his testimony on the part of the defendant's life he knows; (c) the brother might love the defendant and so think him nobler than in fact he was; or (d) he might lie to save his brother from a prison sentence.
Hence, what the brother said is not evidence for the juror, because it itself would not cause knowledge, as opposed to opinion. That is, there are reasons why this testimony would be given and still the defendant would be guilty; and hence, there is no contradiction between the testimony and guilt of the defendant.
Now it might be that this testimony, coupled with other testimony, might make a string of facts which taken together would in practice be impossible unless the defendant was innocent; in which case, the testimony is part of the evidence in his favor, even though in itself it is not evidence.
But if, for example, the Prosecutor could establish that what was in the books was in the defendant's handwriting and (from expert testimony) that no forging was involved, and that the particular entries would be impossible to perform mistakenly, then however great the indications of the person's moral character and so on, it would in practice be impossible for anyone else to have done the falsifying, and for him to have done it unwittingly.
Then there is evidence that proves him guilty; and since anything on the other side would still run up against a contradiction, there is no "reasonable doubt" that he did it.
Evidence, then, as we said, always involves some kind of contradiction if the fact for which there is evidence is not indeed a fact.
The reason why evidence does not always establish absolute certainty is that there is the possibility that additional facts could change the nature of the effect (the fact known with its evidence), and so change the evidence needed.
Even in the case we mentioned earlier of finding coins missing from your pocket, the evidence for your saying that they fell out is the fact that you found a hole in your pocket. It could have been the case, however, that before they had a chance to fall out the hole, your pocket had been picked.
You would have no doubt that the coins had fallen out the hole, provided that you didn't have any evidence to indicate that your pocket had been picked; but you would have been mistaken. Insofar as such a mistake is possible, your certainty is physical certainty and not absolute certainty.
Since this is the case,
Objective doubt always involves facts that would seem to indicate opposite conclusions.
That is, doubt does not come from a lack of evidence. When you don't have evidence, then you are morally certain, not in doubt. You would only doubt if you had reason to believe you were mistaken (i.e. some fact which could be evidence of the opposite).
Again, doubt is not "worry" about whether you are mistaken or not; that emotion is not a fact, but a mental condition. It has nothing to do with evidence.
It cannot be stressed too much that certainty is not "feeling convinced," and doubt is not "feeling unconvinced." Feelings have nothing to do with certainty and doubt (except the subjective kind, which is pseudo-certainty or pseudo-doubt); certainty and doubt are a question of the facts available to the person.
Another point to keep in mind is that it is possible to be objectively certain with physical or moral certainty and be mistaken.
The point here is that certainty is not to be equated only with absolute certainty; you can be objectively certain and still be wrong; but you have no reason to believe you are wrong--and this is certainty, not "probability," or "opinion" (except with moral certainty), still less doubt.
So a person has a doubt when he has facts in conflict. When he resolves the doubt (finds the cause), he becomes certain.
4.1.2. Opinions and certainty
Is a person who holds opinions ever objectively certain of them, and if so, at what level?
This can, I think, rather easily be answered. Remember, an opinion is something for which a person does not have sufficient evidence.
Obviously, then, a person can never be absolutely certain of an opinion. Absolute certainty is always knowledge.
Nor, really, can a person be physically certain; because, while it is theoretically possible to be mistaken with physical certainty, you have evidence that in fact you are not mistaken, and no evidence on the other side. So physical certainty also involves knowledge.
Notice that, as I said, with physical certainty, it can still turn out that new evidence comes to light and proves that you were mistaken. But this does not mean that you had an opinion and not knowledge; all it means is that knowledge is not always infallible. For instance, scientific theories (like Newton's Theory of Universal Gravitation) are knowledge. It turns out that Newton's theory is false; there is no force of gravity as Newton described it. But those who held the theory were physically certain of it, and they had knowledge, not opinion; because at the time they held it, there was no evidence against it. It was only at the beginning of this century that evidence came along to prove that the theory could not be true.
But since moral certainty simply involves a lack of evidence to the contrary, a person can be morally certain of an opinion. (Of course, a person who holds an opinion can be subjectively certain no matter how much evidence there might be against his opinion. You can be subjectively certain of anything. But, as I stressed, subjective certainty is not really certainty.)
But it is also the case that a person can hold an opinion and not be certain at all. Very often we do have facts that indicate that we might be wrong; but the weight of the evidence tends in the direction of the opinion we hold. In this case, we can't be certain that we are right, but there are more facts on our side, and no fact that would make it impossible that we are right.
Here, depending on how strong the facts are on our side and how weak the case is on the other side, it becomes increasingly unreasonable not to hold the opinion as "tentatively true," recognizing that one is not certain of it, but that, absent new evidence, it is more reasonable to hold it to be true than hold it to be false.
Notice, then, that it is not always the case that "there are two sides to every story," meaning that there is always evidence to the contrary, no matter what you think is true.
This is another of those relativistic absolutes. If there are always two sides to every story, then the statement "there are two sides to every story" has "another side" to it, proving that there is evidence against it. So if it's true, it's false.
Remember the secret at the beginning of the first chapter. Don't be led down the garden path of doubt by silly generalizations like "there are two sides to every story." There are not "two sides to the story" of the fact that there is something, for instance, or that what is true is not false in the sense in which it is true.
Nevertheless, it is many, many times the case that the best we can get is a well-informed opinion. There's nothing wrong with opinions when knowledge is not available, and the evidence is not conclusive. It may be even that most of what we "know" is actually opinion with more evidence for it than against it. But this is not always the case; sometimes we can reach knowledge and certainty beyond mere moral certainty.
4.2. Probability
Now then, just what is probability, and why are there "laws" of probability, and so on?
What is a "law" anyway?
DEFINITION: A LAW OF NATURE is a constant way some object behaves, so that its future behavior is predictable.
The effect connected with the laws of probability is that probability deals with what is random, and laws are statements of non-randomness.
That is, "chance" or probability has to do precisely with those events which are not constant, but vary randomly. When you flip a coin or throw a pair of dice, the idea is that there is no connection between what happens on the first throw and the second. If the dice are "loaded" or you flip the coin skillfully so that it does three and a half turnovers every time, then the laws of probability are thrown off; because they suppose that there is no system in the throwing or flipping.
But then how is it that you can make predictions? Can randomness be constant? This seems to be a contradiction; and given that the laws of probability work, we have evidence that the "contradiction" actually occurs, and so we have an effect.
But notice that with the coin, the laws say that heads will come up one-half the time; and with the pair of dice, twelve will come up one-thirty-sixth of the time; and with one die, any given face will come up one-sixth of the time. How do you know? Because there are six faces on the die, and twelve on the two of them; and there are two sides to the coin.
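These ratios come from nothing more than counting the equally likely outcomes the constant structure allows. A minimal Python sketch of that counting (the names here are illustrative, not from any particular library):

```python
from fractions import Fraction
from itertools import product

# A probability ratio is just favorable outcomes over equally likely
# outcomes, read off the object's constant structure.

coin = ["heads", "tails"]                      # two sides
p_heads = Fraction(coin.count("heads"), len(coin))

die = [1, 2, 3, 4, 5, 6]                       # six faces
p_one = Fraction(die.count(1), len(die))

pairs = list(product(die, die))                # 36 combinations of two dice
p_twelve = Fraction(sum(1 for a, b in pairs if a + b == 12), len(pairs))

print(p_heads, p_one, p_twelve)  # 1/2 1/6 1/36
```

The point is that nothing about any individual throw enters the calculation; only the count of faces and sides does.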
That seems to indicate that the predictability doesn't deal with the randomness itself, but with the fact that the dice and the coin have a constant feature in all the throws.
We can test this by making a "die" of soft clay, putting a spot on one side, and rolling it in such a way that as it bounces and rolls on the table, it gets flattened, and so has a number of "faces" that varies at random with each throw. On the first throw, for instance, it becomes a cylinder (with 3 "faces"); on the second, we count seven; on the third it is a perfect sphere (and has either one face or an infinity of them, depending on your point of view); and so on. What now is the probability that the spot will appear on the topmost "face"?
You can't put a number on it; which indicates that the laws of probability are destroyed when everything becomes random.
The laws of probability state that when something that operates randomly has a constant structure underlying the operations, the constant structure will show up through the random operations.
Thus, the laws predict that with a coin, which has (for practical purposes) two sides, the ratio between the number of throws and the number of times heads appears on top will not diverge systematically from two to one. With a die, the ratio between the number of throws and the number of times a given face appears on top is six to one, because the die always has six sides; and so on.
That is the technical meaning of "heads will come up half the time 'in the long run'"; or that "the one-spot will be on top a sixth of the time 'in the long run.'" The "long run" here means that there won't be any systematic deviation from this number (though there may--and in fact will--be plenty of unsystematic ones); and since the divergences are unsystematic, they will tend to cancel each other out--but again not in a systematic way.
Thus, in flipping a coin, you may get fifteen heads in a row; but as you keep flipping it, you begin to get tails more than heads--perhaps a run of two or three tails to one heads, perhaps five tails to three heads, and so on, so that as the number of flips becomes very large, the "runs" in one direction tend to balance those in the other, and the ratio converges on one-half (meaning it gets closer and closer to it the higher the number of flips).
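This convergence can be watched directly by simulating flips; here is a short sketch (the seed is fixed only so that the run is reproducible):

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def heads_ratio(flips: int) -> float:
    """Flip a fair coin `flips` times and return the fraction of heads."""
    return sum(random.randint(0, 1) for _ in range(flips)) / flips

# "Runs" show up at small counts, but the ratio converges on one-half
# as the number of flips grows, with no systematic divergence.
for n in (10, 100, 10_000, 100_000):
    print(n, heads_ratio(n))
```

At ten flips the ratio can be far from one-half; at a hundred thousand it will be very close, though never exactly there in any systematic way.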
Notice that this is a prediction of what will actually happen in the real world, and is not merely a mathematical game. All that the mathematics says is that, for instance, there are six sides to the die, so that at any given throw there is one chance in six that the one-spot will be on top; and this goes for any throw (given non-loaded dice); and so there is no reason for expecting the one-spot on top any more than one-sixth of the time.
But that in itself doesn't establish a reason why the one-spot should appear on top one-sixth of the time rather than one-third (or one-twelfth)--unless it is the case that random operations with constant underlying structures reveal those constant underlying structures.
That is, if you said, "The one-spot will appear on top one-third of the time," you would have no reason at all for this prediction. And the same is true for any other ratio except one-sixth; for that one ratio, you do have the "reason" that, after all, there are six sides to the die.
But why couldn't the operations of the die be totally random, like the operation of the die we made of clay, so that nothing at all would be predictable, even in "the long run"? There's no reason why this couldn't be the case.
But in fact it isn't; and therefore
The laws of probability actually express a law of nature: that random operations of something constant reveal the constant underlying structure.
4.2.1. The "law of averages"
A footnote is in order here on the "Law of Averages."
The assumption in the ordinary person's mind when he sees a "run" of some divergence from the prediction of probability is that the coin or the dice will "even themselves out," and therefore he formulates the fallacious "law of averages," which goes something like this, as applied to flips of a coin: "If there have been twenty heads in a row, the probability of the next flip's being tails will be better than one-half."
This sort of "stands to reason." You can predict from the laws of probability that the probability of getting twenty heads in a row is very small, and the probability of getting twenty-one heads in a row is even smaller. Hence it would seem that when you bet on the twenty-first flip, you would be a fool to bet on heads; it's almost certainly going to be tails.
But that isn't so. The coin doesn't know that it's come up heads twenty times in a row; and the probability given twenty heads in a row that heads will come up the twenty-first time is--one-half. It's one-half for any flip. The probability of twenty-one heads in a row is very small; but most of the improbability, so to speak, was used up in those twenty flips; and so the improbability left for the twenty-first one is just one-half.
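The arithmetic here can be checked exactly. A run of twenty-one heads is very improbable; but conditional on twenty heads having already happened, the twenty-first flip is an even bet:

```python
from fractions import Fraction

p_flip = Fraction(1, 2)          # probability of heads on any single flip
p_20_heads = p_flip ** 20        # probability of twenty heads in a row
p_21_heads = p_flip ** 21        # probability of twenty-one in a row

# Conditional probability: given that twenty heads have already come up,
# the twenty-first flip is just one more flip of a memoryless coin.
p_next_given_20 = p_21_heads / p_20_heads

print(p_20_heads)        # 1/1048576
print(p_next_given_20)   # 1/2
```

The huge improbability of the run lives entirely in the twenty flips already made; dividing it out leaves exactly one-half for the next flip.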
And as a matter of fact, that's what actually happens with real coins, as many a man can attest to his sorrow. The fact that things like this obey the laws of probability and not the "law of averages" and that gamblers believe in the "law of averages" is, among other things, what keeps Las Vegas making a profit.
There's no reason why things couldn't follow the "law of averages"; but they don't, and so don't bet on it.
The reason I say this is that, though mathematicians tend to say the law of averages couldn't work because of the mathematics of probability, they don't see the "hidden parameter" that connects the logic of the mathematics with the operations of physical objects--which in fact "obey" the logic of the mathematics, but wouldn't necessarily have to.
4.3. Statistics
Now statistics are just probability worked backwards.
This means that some statistics are valid and tell us something, and others are just nonsense. Can we distinguish, based on probability, when we should listen to statistical correlations and when we shouldn't?
Probability-like ratios showing up in what seem to be random events can be due to a constant structure underlying those events.
That is, suppose you find the ratio between the number of highway accidents and the number of drivers. Then you notice that the ratio of accidents involving teenage drivers to the number of teenage drivers is significantly higher.
There are two possibilities here. Either this is just a chance correlation (like a run of heads in flipping a coin, or better, having the spot on our clay die come up on top two-thirds of the time in a given set of rolls); or it is an actual probability ratio, and therefore there is something about teenage drivers that makes them more prone to accidents.
You then investigate to see if there is something about being a teenager that would allow you to predict that teenagers should have more accidents than married middle-agers. And the answer is that there are several things. Teenagers have been protected from the consequences of their actions, and so have not as great a concrete realization that even with the best of intentions, horrible things can happen. They tend to be over-confident of their reflexes. They don't have dependents, and so do not have to be careful for others' sakes. And so on. All of these are reasons why teenagers would be less likely to be careful than middle-aged people. Hence, they should have more accidents.
When you put these two together, you find that the underlying "recklessness" of the teenage years shows up in such-and-such a greater ratio of accidents per driver.
Hence, statistics, when valid, reveal something of the nature of what behaves in other respects randomly.
You can't predict how likely it is that any given teenager will have an accident; but you can predict within a certain margin of error how many accidents in a given year will be due to teenagers.
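As a sketch of what "within a certain margin of error" means, suppose (purely hypothetical figures, not real accident data) two million teenage drivers, each with a five percent chance of an accident in a given year; a simple binomial model then fixes the expected count and its spread:

```python
import math

# Purely hypothetical figures, for illustration only (not real data):
n_drivers = 2_000_000   # assumed number of teenage drivers
p_accident = 0.05       # assumed per-driver accident probability per year

# WHICH drivers crash is random; the constant underlying ratio fixes
# the expected count and a margin (one binomial standard deviation).
expected = n_drivers * p_accident
margin = math.sqrt(n_drivers * p_accident * (1 - p_accident))

print(f"expected accidents: {expected:.0f} +/- {margin:.0f}")
```

Note the asymmetry: the individual case stays unpredictable, while the aggregate count is pinned down to within a few hundred out of a hundred thousand.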
But when the ratio can't be found to have anything "underneath" it which would make it predictable that there ought to be some ratio, then the statistics are probably just a chance correlation.
Thus, there may be a high ratio between the number of houses with green window-shades and the number of murders that occur in such houses as opposed to houses with tan window-shades. But there is nothing in the color of the window-shades which would lead a person to predict that the color would lead to killing people.
The tobacco companies are claiming that this is what is the case with smoking and lung cancer and heart disease; that this is just a chance correlation. Unfortunately, nicotine can be shown to make your heart do funny things, and "tar" damages animal tissue in laboratory tests; and so taking that stuff into your lungs or mouth would be likely to do you some harm--and therefore, the statistics are valid. Smoking is a cause of lung cancer and heart disease and the rest of it; the smoking explains why there is a higher ratio of these diseases among smokers than among the general population.
4.4. Induction
Some philosophers claim that the logical operation called "induction" is based on probability.
In one sense they're right and in another they're not.
DEFINITION: INDUCTION is the leap from knowing that a fact is true of certain instances of an object to knowing that it is true for all instances of that object.
The effect here is that induction seems to violate a cardinal rule of logic (which we will see later); that you can't move from "some" to "all." If some people like baseball, it doesn't follow that everyone does.
But induction, on the other hand, works. It is how we get the laws of nature. We observe some cases of hydrogen combining with oxygen to form water, and we conclude that this is always what you get with these two chemicals (under the proper conditions--we want to admit the possibility of hydrogen peroxide, and so on; but let's not complicate things unnecessarily. You see the point.)
Some people, like David Hume (the one who didn't like causality), say that the only thing you know in cases like this is "The hydrogen I have tested combines with oxygen to form water," and you say that the next instance will do this just because you got into the habit of expecting it.
But this makes the case of hydrogen like observing some baseball fans and concluding that all human beings are baseball fans "just because you got into the habit of expecting it." Besides, if Hume says this is what accounts for all our instances of making inductive generalizations, hasn't he made an induction, which according to him is invalid? Doesn't he have to say, "The instances I've tested worked out to be due to habit, but I couldn't say whether this will be the case in the future"? So he really should have shut up and not published his "findings."
In fact, it's silly to say that we don't know whether hydrogen will combine with oxygen to form water. In fact, if a scientist takes something from a bottle labeled "hydrogen" and combines it with something labeled "oxygen," and what he gets is a gold powder, he will say, "Who switched the labels?" before he will say, "Oh, there are some instances of hydrogen that combine with oxygen to make iron pyrite."
Some have said that what we do is see a few instances of the combination happening and then define "hydrogen" to be "what combines with this other stuff to form water." And of course the word "hydrogen" is Greek for "water-generator."
The trouble with that explanation is that it would work for what hydrogen did with oxygen; but how could you know that hydrogen also has a certain spectrum when you burn it? You've already "defined" it in terms of its operation, and it doesn't follow from this definition that every instance of what combines with oxygen to form water will also have this particular spectrum when burned. For that, you need to make an induction, not an arbitrary definition.
What seems to explain induction is a kind of version of what we said dealing with statistics.
We first see some instances of something operating in a constant way (not in a random way, now). We observe enough cases of this to assure ourselves that this is because of some "underlying structure."
We examine the thing to see if there is a structure which would make the operation in question predictable. If there is, then we conclude,
"Because the thing has this structure, it behaves in this way; therefore anything with this structure will behave in this way."
Hence, we see that because hydrogen has one electron and oxygen lacks two in its outer "shell," you could predict that two atoms of hydrogen would combine with one atom of oxygen, and you would get some compound. What you get is water; and so you can say that all instances of hydrogen (what has this structure) combine with oxygen to form water. Similarly, what has one electron could have a certain number of excited states, which would give it a certain spectrum. Therefore, all cases of hydrogen have this spectrum. Voilà.
DEFINITION: The NATURE of something is its constant structure which reveals itself in its operations.
Thus, it is "the nature of hydrogen" to have a certain spectrum and to combine with oxygen to form water and with chlorine to form hydrochloric acid, and so on. It is "the nature" of teenagers to be reckless and have more auto accidents than adults. It is "the nature" of things that operate randomly to have their constant underlying structure show up through the operations.
Does this mean that it is probable that hydrogen combines with oxygen to form water? No, because probability deals with random operations, not constant ones; and this behavior of hydrogen is constant. It is probable that the one-spot will appear on the top of a die on some throw, because the throw is random.
So those philosophers who say that induction gives a person a "probability" that something will happen have not understood what probability really is. In fact, the laws of probability themselves, as I tried to show, are laws of nature, and the result of an induction.
Then are we certain of the results of induction? Yes, with physical certainty. We have evidence that hydrogen, just because it is what it is, behaves as it behaves; and so all cases of it will behave this way.
Can we be wrong? Yes, in two ways.
First of all, we may have missed some evidence, and so made a faulty induction. The induction, based on chemistry, that you can't turn lead into gold, turns out not to be true now that we know that you can fool around with the nucleus of the atom, adding protons and neutrons.
Secondly, there can be defective cases of the thing in question. "All human beings can see" is a valid induction; but some human beings have detached retinas in their eyes and so can't see. But it is still "of the nature" of even these human beings to see, as can be shown by the fact that their retinas can be reattached and then they can see.
So inductive generalizations remain true even in the face of instances to the contrary; because the induction says, "the structure is such that it results in this behavior" and if the structure is complex (as it always is, even in the atom), then the structure can be "almost such" but not quite--which results in a defective instance.
Hence, we are certain of the results of induction; but our certainty is not absolute; it is physical certainty.
Summary of Chapter 4
Certainty is the opposite of doubt, not probability. Subjective certainty, emotional conviction, is not real certainty, and can be ignored. A person is absolutely certain when his evidence establishes that it is impossible for him to be mistaken; physically certain when he has evidence supporting what he thinks is true and no evidence opposing it; and morally certain merely if there is no evidence indicating that he is mistaken.
You can be physically or morally certain and be mistaken; the point is that you have no reason to think you are mistaken, so you are certain.
Not all evidence establishes the impossibility of being mistaken, because not all evidence excludes the possibility of further evidence; but it is always some fact which itself would be impossible if the fact it is evidence for were not a fact. When you have evidence, then, for something and no evidence against it, you are certain.
Objective doubt always involves facts that would seem to indicate opposite conclusions (evidence on both sides).
A person can be certain of an opinion with moral certainty, but not with physical or absolute certainty (in the latter cases, he has knowledge, not opinion). Many times all we can have is well-informed opinions (where the weight of the evidence favors one side, but there is evidence on both); but sometimes we can have knowledge.
Probability involves laws of nature, which are constant ways in which objects behave, so that their future behavior is predictable. Probability, however, deals with the random, and so it seems it cannot have laws governing it.
But objects operate according to the laws of probability when not everything about them is random; the law states that when something that operates randomly has a constant structure underlying the operations, this structure will make the operations not totally random. There will be no systematic divergence from a predictable mathematical ratio dealing with the operations. This is verified in the actual operations of such objects, and so it is a law of nature.
But the "Law of Averages" (which says that deviations from probability make prediction of the next event different from the probability ratio--things "even themselves out") does not work. The reason is that the unlikelihood of the events preceding uses up most of the unlikelihood of the next event continuing the "string," and so the next event is just as probable as the one preceding it.
Statistics are probability backwards. If events seem to be exhibiting a probability-like ratio, then this might be due to some underlying structure. If you can find some structure which would make it reasonable to predict the event in question, then the statistics are valid; if not, the correlation is as likely as not just coincidence. The nature of something is the constant structure it has which reveals itself in its operations.
Induction is the leap from knowing that a fact is true of certain instances of an object to knowing that it is true for all instances.
Induction seems to violate a rule of logic that you can't move from some instances to all instances; but it is still valid. How?
First, it does really move from events seen to events not seen, because it is silly to say we can't know that hydrogen will combine with oxygen to form water except in the cases we have observed.
Secondly, it is not based on simply defining the object as "whatever does X" because by induction we discover that all cases of "What can do X" can also do Y; which could not be got at by definition, but must be due to both properties' actually being in all cases of both objects.
We make inductions by observing enough cases of constant operations to convince ourselves that there is a constancy in the object's structure; when we find what it is about the structure, we then conclude that all cases of this object will do X (because all cases have the structure which causes X). We have found the nature. This is not probability, but certainty, because it deals with what is constant, not random.
Inductions can be mistaken if we have missed some evidence which would falsify our generalization, or because there are defective cases of objects which have almost all of the structure, but lack some crucial part dealing with the operation.