
Inductive Reasoning: Generating Knowledge


Chemistry Lab.

Many never took the course (possibly to their relief). But for those who did, some enjoyed it, others dreaded it. Some delighted in their dexterity at titration (yes, some did, and we should be glad, since with their lab skill they may find a new drug or create a breakthrough chemical), while others pressed their lab partners into performing that task.

Few, I recollect, enjoyed writing the obligatory post-experiment lab report.

Whether a source of enjoyment or not, chemistry lab exemplifies our topic here, inductive reasoning. In a lab, participants record observations and collect data and, in combination with data and findings from prior experiments, generate new conclusions. That illustrates the essence of inductive reasoning, i.e. using present and past data and knowledge to go forward to reach new conclusions.

So in our chemistry lab, we might test the acidity of rain water from different locations, and draw conclusions about the impact of pollution sources on pH. We might sample grocery store beef, and make conclusions about the accuracy of the fat content labeling. We might analyze lawn fertilizer, and generate theories about how its components are blended together.

These examples illustrate inductive reasoning, going from information to conclusion.

Note however a subtle, but critical, feature of inductive reasoning – the conclusions are not guaranteed to be true. Our conclusions may prove useful and productive and even life-saving, but however beneficial our findings, inductive reasoning does not contain sufficient rigor or structure for those conclusions to be guaranteed true.

Deductive vs. Inductive Reasoning

So inductive reasoning doesn’t guarantee true conclusions. That is interesting – and possibly unsettling. Inductive reasoning underlies our prediction that the Earth will rotate to create a tomorrow, and we would like to think tomorrow is a certainty.

So let’s explore this particular issue of certainty of conclusion, and inductive logic in general, and do so through a contrast with another major type of reasoning, i.e. deductive.

Now, one often cited contrast between the two highlights general vs. specific. In particular, deductive reasoning is said to proceed from the general to the specific, while inductive reasoning is said to proceed in the opposite direction, from the specific to the general.

That contrast does give insight, and holds true in many cases. But not always. For example, in geometry, we use deductive logic to show that the angles of all triangles (in a Euclidean space) sum to 180 degrees, and we similarly use deductive logic to show that for all right triangles (again in a Euclidean space) the sum of the squares of the two shorter sides equals the square of the longest side.

For inductive logic, we might observe our pet, and notice that certain foods are preferred over others, and thus generalize as to what foods to buy or not buy for our pet. We make no claims or conclusions about the pets of others.

Thus, we used deductive logic to prove a general statement, and inductive logic to make a conclusion about one specific pet. The general and specific descriptions don’t quite provide a correct delineation of deductive and inductive logic. We need a more rigorous characterization.

Deductive logic, more rigorously, involves the use of reasoning structures in which the truth of the premises logically generates the truth of the conclusion. In deductive reasoning, the construction of the proof logic and the syntactic arrangement of its component parts assure that true premises create true conclusions.

Why is that? In its most extreme representation, deductive logic floats out in a symbolic ether, consisting of just variables, and statements, and logic operators. So, in the extreme, deductive logic isn't about anything; rather, it is a system of proof. Now, in everyday life, we insert real-life objects. For example, we might construct a deductive proof as follows:

  • Samantha is a person
  • All persons are mortal
  • Therefore, Samantha is mortal

This involves real-life objects, but that is just happenstance. We could just as well have written: if "Xylotic" is a "wombicome", and "wombicomes" are "kubacjs", then "Xylotic" is a "kubacj". The structure of these sentences and the meaning of connective words like "is" entail that the conclusion is true if the two premises are true.
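
As a minimal sketch, the Samantha syllogism can be written in predicate logic, with $P$ standing for "is a person" and $M$ for "is mortal":

$$\forall x\,\big(P(x) \rightarrow M(x)\big),\quad P(\text{Samantha}) \;\vdash\; M(\text{Samantha})$$

The inference goes through no matter what $P$ and $M$ stand for; swap in "wombicome" and "kubacj" and the structure, and hence the conclusion, is unchanged.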

Back to Inductive Logic

While in deductive reasoning the logical and syntactic structure inherently plays a central role, for inductive reasoning, such structures are less central. Rather, experience stands front and center, and in particular our ability to discern patterns and similarities in that experience, from which we extrapolate conclusions.

Let’s think about our example of our pet and what food to feed it. In working towards an answer, we didn’t approach the problem as if in geometry class – we didn’t start constructing logical proof sequences. Rather, we focused on collecting information. We tried different foods and different brands, and took notes (maybe just mental, maybe written down) on how our pet reacted. We then sifted through our notes for patterns and trends, and discovered, for example, that dry foods served with milk on the side proved the best.

At a more general level, we can picture scientists, and designers, and craftsmen, and just plain everyday individuals, doing the same. We can picture them performing trials, conducting experiments, collecting information, consulting experts and using their knowledge of their field, to answer a question, or design a product, or develop a process, or just figure out how to do something the best way.

Why does this work? It works because our world exhibits consistency and causality. We live in a universe which follows rules and displays patterns and runs in cycles. We can conceive in our minds a world not like that, a universe in which the laws of nature change every day. What a mess that would be. Every day would be a new challenge, or more likely a new nightmare, just to survive.

Inductive reasoning thus involves our taking information and teasing out conclusions, and such reasoning works due to the regularity of our universe.

But why doesn’t this guarantee a true conclusion? What’s wrong here?

Nothing in a practical sense. Rather, the issue is one of formal logical structure.

Specifically, what assumption lies behind inductive conclusions? What do we presuppose will be true? Think about it. Inductive logic presumes past patterns will predict future patterns, that what we observe now tells us what will be the case in the future.

But that assumption, that presupposition, itself represents an inductive conclusion. We assume past patterns will predict future patterns in a given case because our experience and observations, both formal and everyday, have led us to a meta-conclusion that, in general, what we observe and know now provides a guide to what we have yet to observe and know.

So we have made a meta-conclusion that our world acts consistently. And that meta-conclusion isn’t a bad thing. Mankind has used it to make amazing discoveries and enormous progress.

But in the world of logic, we have created a circular argument. We have attempted to prove the logical soundness of inductive reasoning using a conclusion based on inductive reasoning. Such a proof approach fails logically. Philosophers and logicians have dissected this issue in depth, attempting to build a logically sound argument for the truth value of induction. Such an argument may or may not exist (some think they have found one); more importantly, the issue concerns truth value in the formal logic sense.

The presence or absence of a formal proof about the truth value of inductive logic does not undermine induction’s usefulness. Your pet doesn’t mind. It is just glad you figured out what food it likes.

Bases for Forward Extrapolation

So while not formally providing truth, inductive logic provides practical conclusions. If the conclusions don’t stem from a formal logic, how do we reach inductive conclusions? Let’s start with an example:

When someone shakes a can of soda, the soda almost always gushes out when the can is opened.

How did we (and many others) reach that conclusion?

First, we extrapolated that shaking a can will cause the soda to gush out based on observed patterns. We have observed a good number of shaken cans, and almost always shaken cans gush out soda when opened. This repeating pattern, present regardless of the brand of soda, but almost always present when the soda is carbonated, gives us confidence to predict future occurrences.
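
As a minimal sketch, assuming made-up observation counts, this pattern-based extrapolation amounts to little more than tallying outcomes:

```python
# A toy illustration of pattern-based induction: estimate how likely a
# shaken can is to gush, based on a hypothetical record of observations.
observations = {"gushed": 47, "did_not_gush": 3}  # invented counts

total = sum(observations.values())
estimated_rate = observations["gushed"] / total

print(f"Observed gush rate: {estimated_rate:.0%} over {total} shaken cans")
# Induction: we extrapolate that the next shaken can will very likely gush.
# Nothing in the arithmetic guarantees it; the repeating pattern merely supports it.
```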

We can also reason by analogy. Even without ever having observed the opening of a shaken can of soda, we may have seen the opening of shaken bottles of soda. From our experience and learning, we have an intuitive sense of when one situation provides insight into similar situations. We don't expect two people who are similar only in coming from the same city to like the same ice cream. But we sense intuitively that a shaken can of soda might be similar to a shaken bottle of soda, and thus conclude that both would exhibit the same outcome when opened, i.e. the soda gushing out.

Finally, we based our conclusion on causality. We understand the linkages present in the world. So we know that soda is carbonated, and that shaking the can releases the carbonation, increasing the pressure in the can. Thus, even if we never previously experienced an opening of a shaken can or bottle of soda, we can step through the causal linkages to predict the outcome.

Some subtle reasoning steps exist here. For example, in using analogy, we first extended our base conclusion, on shaken bottles, outward. Our observations of shaken bottles generated a conclusion that shaken bottles of carbonated liquids gush outward when opened. When we thought about what would happen with a shaken can of soda, we re-examined our observations on bottles, and broadened our conclusion to state that shaken sealed containers of carbonated liquids will gush outward when opened.

In using causality, we brought in a myriad of prior conclusions. These included that agitation liberates dissolved carbon dioxide from liquids, that the added carbon dioxide gas will increase the pressure in a sealed container, that materials flow from high to low pressure, and that significant carbonation exists in soda. We then used some deductive logic (note the interplay of induction and deduction here) to reason if all of these are true, shaking a can of carbonated soda will cause the liquid to gush outward when we open the can.
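
A minimal sketch, in Python, of stepping through such causal linkages, with the prior (inductive) conclusions written as simplified if-then rules; all of the fact and rule wordings here are hypothetical:

```python
# Toy forward-chaining over simplified versions of the causal premises above.
facts = {"soda is carbonated", "can is shaken", "can is sealed", "can is opened"}

rules = [
    ({"soda is carbonated", "can is shaken"}, "CO2 comes out of solution"),
    ({"CO2 comes out of solution", "can is sealed"}, "pressure in can is high"),
    ({"pressure in can is high", "can is opened"}, "soda gushes out"),
]

# Repeatedly apply any rule whose premises are all established facts.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("soda gushes out" in facts)  # True: the causal chain reaches the prediction
```

Note the interplay the text describes: the chaining itself is deductive, while the individual rules are products of induction.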

Interplay of Inductive and Deductive Logic

We should say a few more words about the interplay of inductive and deductive reasoning. In our chemistry class, once we use inductive reasoning to formulate a conclusion (or let’s use a more precise terminology, i.e. formulate a hypothesis), we often use deductive reasoning to test the hypothesis. We might have tested samples of meat labeled “low” fat from five grocery chains, and found that samples from one grocery chain measured higher in fat than the samples from the other four chains. Our hypothesis then might state that this one grocery chain defines meat as “low” fat at a higher (and maybe deceptively higher) percent fat than the other chains. We then deduce that if the definition causes the labeling result, added samples of “low” fat should have a relatively high percent fat, and further that samples not labeled “low” should have a higher fat content still.

Let’s say however, that added testing doesn’t show these outcomes. We find with our wider added sample no relation between the labeling and the actual percent fat. The labeling appears as random as flipping a coin. We thus take the added data, discard our original theory and hypothesize that the grocery chain’s measurement system or labeling process might have issues.

Note here how induction led to a hypothesis, from which we deduced a method to test the hypothesis, and then the data we collected to confirm or deny our deduction led to a revision in our (inductive) hypothesis.

This again speaks to the logical truth value of induction. We form a hypothesis A, which implies we should see result B in our data. If we don’t see result B, we can assuredly conclude “A” lacks validity, at least in some part. Why? If A requires B, then the occurrence of Not B implies Not A. However, if we do see results B, we have an indication A might be true, but caution is needed. If A requires B, the occurrence of B does not imply A. (If it just rained, the grass will be wet. But the grass being wet doesn’t assure that it rained – we could have just run the sprinkler.)
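
In symbols, the two inference patterns just described:

$$A \rightarrow B,\ \neg B \;\vdash\; \neg A \qquad \text{(modus tollens: valid)}$$

$$A \rightarrow B,\ B \;\nvdash\; A \qquad \text{(affirming the consequent: invalid)}$$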

Faulty Induction

The world exhibits regularity, and through inductive reasoning we informally and formally tease out findings and conclusions that (attempt to, but with good practical success) capture that regularity.

But we can be fooled. We can, and do, reach incorrect conclusions.

Stereotyping represents a major type of faulty induction. Let’s say we see a few instances in which young males are caught speeding. We then take notice of future such instances, preferentially, i.e. the first few instances trigger a tentative hypothesis, and that makes us more aware of examples that fit the hypothesis. Soon we begin believing all young male drivers speed.

However, we have almost certainly overreached. In making our conclusion we didn't have any widely collected, statistically valid demographics showing whether all young male drivers speed, or even whether a significant percentage do. Rather, we used selectively collected anecdotal information, making our conclusion too sweeping compared to our basis for making it.
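
A toy simulation, with invented numbers, of how selective noticing inflates an inductive conclusion:

```python
import random

random.seed(0)
TRUE_SPEEDING_RATE = 0.15   # assumed for illustration only

# Simulate 1000 young male drivers we pass on the road (True = speeding).
drivers = [random.random() < TRUE_SPEEDING_RATE for _ in range(1000)]

# Unbiased tally: count everyone.
unbiased = sum(drivers) / len(drivers)

# Biased tally: once primed by a hypothesis, we notice speeders,
# and only occasionally register the drivers who are not speeding.
noticed = [d for d in drivers if d or random.random() < 0.2]
biased = sum(noticed) / len(noticed)

print(f"Actual rate in sample: {unbiased:.0%}")
print(f"Rate among the drivers we noticed: {biased:.0%}")  # much higher
```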

Correlation without causality also leads to faulty induction. Let’s say we do have good demographic information and unbiased sample data. That data shows that A and B occur together at a statistically significant level. So A might be asthma in young children, and B might be lung cancer in a parent. We conclude a genetic linkage might be present.

However, we missed factor C, whether or not the parent smokes. A more in-depth look at the data reveals that factor C is the cause of both A and B, and that when we control the analysis for such common causative factors (smoking, air pollution, workplace asbestos brought home on clothes, etc.), we cannot statistically show that A and B are related.
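
A minimal sketch, with invented probabilities, of a confounder at work: C (parental smoking) raises the rates of both A and B, so A and B appear linked even though neither causes the other.

```python
import random

random.seed(1)

def simulate(n=100_000):
    rows = []
    for _ in range(n):
        c = random.random() < 0.3                       # confounder: parent smokes
        a = random.random() < (0.20 if c else 0.05)     # childhood asthma
        b = random.random() < (0.10 if c else 0.01)     # lung cancer in parent
        rows.append((a, b, c))
    return rows

def rate_b_given_a(rows, a_value):
    subset = [b for a, b, c in rows if a == a_value]
    return sum(subset) / len(subset)

rows = simulate()
print("P(B | A):     ", round(rate_b_given_a(rows, True), 3))
print("P(B | not A): ", round(rate_b_given_a(rows, False), 3))
# B looks associated with A overall...

smokers = [(a, b, c) for a, b, c in rows if c]
print("Among smokers, P(B | A):    ", round(rate_b_given_a(smokers, True), 3))
print("Among smokers, P(B | not A):", round(rate_b_given_a(smokers, False), 3))
# ...but within a fixed level of C, the apparent A-B link largely disappears.
```

Controlling for C, as in the last two lines, is a crude stand-in for the more sophisticated techniques formal studies employ.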

In formal studies, such as on health effects, researchers have available and do employ sophisticated techniques to weed out such false causality. But in our everyday common sense, we may not do so as readily. We may conclude certain foods, or certain activities, lead to illness or discomfort, but fail to notice we eat those foods or do those activities in certain places. The locations could be the cause, or alternatively, we could blame the locations when the foods or activity could be the cause.

Insufficient sampling scope can generate errors, or more likely limit the scope of conclusions. As telescopes and satellites extend our reach into the universe, and reveal finer details of planets and moons, astronomers have become amazed at the diversity of celestial objects. In part, this amazement stems from having only our solar system available for study. It was the only sample available. And though astronomers have and had the laws of physics to extrapolate beyond our solar system, exactly what extensions of those laws actually exist in the form of planets and moons remained a calculation, until recently.

Similarly, we have only life on Earth as a basis for extrapolating what life might, or might not, exist on other planets and moons. Astrobiologists possess much science from which to extrapolate, just as astronomers do relative to planets and moons. But having a sample of one for types of life certainly limits the certainty with which astrobiologists can make predictions.

Other similar examples of limited sampling scope exist. We have only one Universe to sample when pondering fundamental constants of physics. We have only the present and past when extrapolating what future technologies, and societies, and social advancement, may occur. We have only our experience as spatially limited, finite, temporal beings upon which to draw conclusions about the ultimate nature of the spiritual.

Thus, while “insufficient sampling scope” may trigger images of researchers failing to sample wide enough, or our own behavior of drawing quick conclusions (e.g. say condemning a restaurant based on one meal), “insufficient sampling scope” also relates to big picture items. Some of these big picture items may have little immediate impact (the diversity of planets, at least for the near future, does not relate to paying our bills, or whether our team will make the playoffs), but the nature of the spiritual likely does mean something to a good many. And no doubt we have limited data and experience upon which to truly comprehend what, if anything, exists in the spiritual realm.

An Example of Faulty Induction: Motion of the Planets

Two great titans of astronomy, Ptolemy and Newton, ultimately fell victim to faulty induction. This provides a caution to us, since if these stellar minds could err, so can we.

Ptolemy resided in Alexandria, in Roman Egypt, about a century after the start of the Christian era. He synthesized, summarized and extended the then-current data and theories on the motion of planets. His model was geocentric, i.e. the Earth stood at the center of the solar system.

Why place the Earth at the center? Astronomers held a variety of reasons – we will cite one. At the time of Ptolemy, astronomers concluded the Earth couldn’t be moving. After all what would move the Earth? Our planet was enormous. All experience showed that moving an enormous object required enormous continuous effort. Lacking an indication of any ongoing effort or effect that would move the Earth, astronomers concluded the Earth stood still.

The error, an error in inductive logic, centered on extending experience with moving Earth-bound objects, out to planetary objects. On Earth, essentially everything stops if not continually pushed (even on ice, or even if round). Friction causes that. Planets in orbit, however, don’t experience friction, at least not significant friction. Thus, while just about every person, every day, with just about every object, would conclude moving an object requires continual force, that pattern does not extend into a frictionless environment.

Newton broke through all assumptions before him (like that the Earth wouldn’t move in the absence of continuous force) to formulate a short set of concise, powerful laws of motion. Much fell into place. The elliptical orbits of planets, the impact of friction, the acceleration of falling objects, the presence of tides, and other observations, now flowed from his laws.

But a small glitch existed. The orbit of Mercury didn't fit. That small glitch became one of the first demonstrations of a set of theories that superseded Newton's laws, the theories of relativity. Relativity, boldly stated, holds that gravity does not exist as we imagine. Objects don't necessarily attract one another; rather, mass and energy curve space-time, and objects follow the resulting geodesics in curved space-time.

Why hadn’t Newton conceived of anything like relativity? In Newton’s time, scientists viewed time and space as absolutes, immutable, unchanging, and further that the universe was fundamentally a grid of straight lines. That view fit all the observations and evidence. Clocks counted the same time, distances measured the same everywhere, straight lines ran in parallel. Every scientific experiment, and the common experience of everyday life, produced a conclusion that time acted as a constant and consistent metronome, and that space provided a universal, fixed lattice extending in all directions.

But Newton erred; actually, just about everyone erred.

Einstein postulated that time and space were not fixed. Rather, the speed of light stood as absolute and invariant, and time and space adjusted themselves so that different observers measured light at the same speed. Further, given a view that time and space were not fixed, he theorized that gravity was not necessarily an attraction, but a bending of space-time by mass and energy.

Newton and his peers erred by extrapolating observations at sub-light speeds, and solar system distances, to the grand scale of the universe. We can't blame them. Today particle accelerators routinely encounter relativity. As these accelerators speed up particles, the effective masses of the accelerated particles grow without bound as particle speeds approach the speed of light. Relativity predicts that; Newton's laws do not. But particle accelerators, and similar modern instrumentation, didn't exist in Newton's time, so those in Newton's era didn't have that phenomenon available for consideration. And the glitch in the orbit of Mercury did not pose a wrinkle sufficiently large to trigger the thought process that inspired relativity.
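
The particle-mass growth mentioned above follows the Lorentz factor $\gamma$, which diverges as the particle speed $v$ approaches the speed of light $c$:

$$\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad \gamma \to \infty \ \text{ as } \ v \to c$$

At everyday speeds, where $v \ll c$, $\gamma$ is indistinguishable from 1, which is why Newton's laws matched every observation available in Newton's era.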

Did Ptolemy and Newton have it wrong? Wrong would characterize their thinking too stringently. Their conclusions were limited. Ptolemy's Earth-centered theory reasonably predicted the future locations of planets, but would fail in the design of a satellite trajectory to Mars. Newton's laws work for that satellite trajectory, but wouldn't help in understanding the very subtle impact of gravity on GPS satellite timing.

Inductive Reasoning: The Foundation of Technology

The culture of humankind now rests on our technology. We cannot go back to a simpler time; the size of our human population and our expectations and routines of daily life depend on the extensive and comprehensive array of technology with which we have surrounded ourselves.

While technology has not been an unblemished development, most would agree it has brought much improvement. The simpler past, while possibly nostalgic, in reality entailed many miseries and threats: diseases that couldn’t be cured, sanitation that was substandard, less than dependable food supplies, marginally adequate shelter, hard labor, the threat of fire, minimal amenities, slow transportation, slow communication, and so on. Technology has eliminated, or reduced, those miseries.

Technology thus has ushered in, on balance, a better era. But where did our technology come from? I would offer that, at a most foundational level, our technology rests on mankind’s ability for inductive reasoning. We have technology because the human mind can see patterns, and extrapolate from those patterns to understand the world, and from that understanding create technology.

Look at other species in the animal kingdom. Some can master simple learning, e.g. hamsters can be taught to push a lever to get food. A few can master a bit more complexity, e.g. a few primate individuals can learn symbols and manipulate the symbols to achieve rewards. Many species, for example wolves and lions, develop exquisite hunting skills. So yes, other species can take experience, identify those behaviors that work, and extrapolate forward to use those behaviors to achieve success in the future. We can consider that a level of inductive reasoning.

But the capabilities of other species for inductive reasoning rank as trivial compared to mankind. Even in ancient times, mankind developed fire, smelted metals, domesticated animals, raised crops, charted celestial movements, crafted vehicles, erected great structures, and on and on, all of which, at the basic level, involved inductive reasoning. To do these things, mankind collected experiences, discerned patterns, tested approaches, and built conclusions about what worked and what didn’t. And that constitutes inductive reasoning.

As we move to the modern era, we find mankind implicitly understood, and of course continues to understand, that patterns exist. Knowing the benefits of finding patterns, and understanding the limits of our innate senses, we developed, and continue to develop, techniques and instruments to collect information beyond the capabilities of our raw senses. At first, mankind crafted telescopes, microscopes, increasingly accurate clocks, light prisms, weight balances, thermometers, electric measurement devices, and chemistry equipment. We are now several generations further, and we utilize satellites, particle accelerators, DNA sequencers, electron microscopes, medical diagnostic equipment of all types, and chemical analysis equipment of all variations, to list just some.

With those instruments mankind collected, and continues to collect at astounding rates, information about the world. And we have taken, and continue to take, that information to extrapolate the patterns and laws and regularities in the world. And from those we develop technology.

Take the automobile. Just the seats involve dozens of inductive conclusions. The seats contain polymers, and chemists over the centuries have collected numerous data points and performed extensive experiments to extrapolate the practical and scientific rules required for successful and economical production of the polymers. The polymers are woven into fabric, and machinists and inventors over the centuries had to generalize from trial and error, and knowledge of mechanical equipment, and the principles of statics and dynamics, to conclude what equipment designs would successfully, and economically, weave fabric. And that is just the seats.

As we have stated, inductive reasoning does not by formal logic produce conclusions guaranteed to be true. We highlighted that with the laws developed by the luminary Isaac Newton. Einstein's relativity corrected limitations in the applicability of Newtonian gravity and mechanics. However, that the inductive reasoning of Newton proved less than perfect did not diminish the grandeur or usefulness of his reasoning within the scope where his laws did, and still by and large do, apply.

Good inductive reasoning stands as a hallmark of mankind's intellectual prowess, and though it can't guarantee truth, inductive reasoning can do something most would find equally or more valuable: it can enable progress and understanding.
