Tuesday, November 15, 2011

variability, attention, and strategy use.

Variability - CEP 903 11.9.07

  • Variability exists between gifted and mainstream children, as illustrated by Johnson et al. (2003). In their research, younger and older gifted children and mainstream children were tested on M capacity tasks and on tasks measuring speed of processing, inhibition, and interference. The results show that gifted children possess superior mental attention capacity, perform better on speeded tasks, and are better at controlling attention (resisting interference). In short, greater mental attention capacity gives gifted children more cognitive resources to work with.

The educational implication is that because gifted children focus better, they are less likely to be distracted in class, whereas mainstream children would have more difficulty staying on-task in the presence of distraction. The results could also suggest that gifted children have learned to form automatic associations between two concepts (e.g., in the trail making task they associate 1 with A, 2 with B, etc.) in a way that increases the likelihood that activating one idea will prime activation of the other (1 primes A, 2 primes B, and so on). If this is the case, then they may not truly be better at effortful inhibition; rather, they are better at forming associations that allow inhibition to become automatic.

Gifted children also did not show superior inhibition, suggesting that they were not better at suppressing task-irrelevant information. This is illustrated by negative priming: when a distracter stimulus from one trial becomes the target stimulus on the next trial, gifted children responded more slowly because the earlier inhibition had dampened that stimulus's activation level. ADHD children, on the other hand, probably shift attention well but do not focus well; therefore, they would be less influenced by negative priming.

  • Another study on attention illustrates a second instance of variability in developing children. Jones et al. (2003) tested 3- and 4-year-olds on their ability to follow or inhibit instructions given by two different sources in a Simon Says task. Younger children had lower accuracy, whereas older children made few inhibitory errors. However, when the older children did make an error, their reaction time on the next trial was longer. This could be an early sign of metacognition, as defined by Miller et al. (1986): older children who made few inhibitory errors were trying to figure out what they had done wrong, while younger children were unable to detect their own errors. As the results suggest, there is a negative correlation between attentional shifting and the ability to inhibit responses.

The educational implication is that preschoolers who shift attention easily have difficulty maintaining focus, which could be an indication of ADHD or some other attentional problem. ADHD children would lose focus after fewer trials and would experience less trial-to-trial interference because they would be less aware of having made a mistake. These results relate to Kruschke's (2003) work in suggesting that attending to a target requires inhibiting a distracter.

Because accuracy and error-related performance are determined by brain structures in the frontal cortex, the development of attention should depend on proper development of those frontal structures (Nelson). For example, a negative prenatal environment can stunt brain development, which would ultimately impair a child's ability to reduce errors on attention-inhibition tasks compared to normally developing children.

  • Strategy development, as studied by Miller et al. (1986) and Welch-Ross et al. (2000), reveals variability among 4-, 6-, 8-, and 10-year-old children tested on their ability to use strategies effectively. Younger children were less likely to use a strategy, or relied on a single strategy whether it worked or not. Older children, on the other hand, were not only more likely to use a strategy, but used one tailored to the task.

The results argue that increasing age leads to increasing automaticity of strategies, for several reasons. First, neural connections become stronger with experience (Nelson). Second, as illustrated by Johnson et al. (2003) and Jones et al. (2003), younger children are less able to draw on multiple cognitive resources at once (e.g., memory and strategy). Finally, in agreement with Johnson and Jones and with the limited capacity model (Miller et al., 1986), the capacity needed to access the right strategy and monitor its use leaves little capacity for memory (TOM). Although age contributes to strategy use, so does the type of strategy used. Experience is key to becoming automatic at strategies and to becoming metacognitive (as illustrated by the deception task).

Educational implications include realizing that young children do not have enough cognitive space to attend to multiple things at once and are less likely to use strategies, because strategy use draws on memory, a resource that is already taxed by the demands of the task itself. As Kruschke (2003) would infer, what we remember is limited by attention: if we pay attention to A, we have fewer resources available to attend to B.

  • Finally, literacy achievement as described by Reynolds et al. (1990) shows variability in children as they develop. They found differences between less successful and more successful 10th grade readers. More successful readers were likely to use selective attention strategies (SAS) to attend to important information and were better at conceptual tasks. Less successful readers, on the other hand, were less likely to attend to important information because their attention was directed toward perceptual qualities.

These results suggest that attention does not necessarily equal learning, and perceiving does not mean processing, if attention and perception are focused on irrelevant properties. Increased metacognitive awareness allowed the more successful readers to use SAS to recall the important information.

These findings have several educational implications for literacy achievement. To begin with, they suggest that using appropriate strategies leads to better learning (Siegler, 2007). They also suggest that early reading is important because once language pathways lose stimulation, those pathways are more difficult to redirect and strengthen (Nelson). In order for readers to comprehend, decoding and vocabulary have to become automatic by 1st grade (Juel, 1988).

Domain specific vs. domain general

Pinker (1994) proposes that language development is domain specific. In other words, he believes that our abilities are prewired and controlled by separate mechanisms, processes, and pathways. His position is supported by instances in which individuals with language impairments possess intact intellectual abilities while individuals with impaired intellectual abilities possess intact language. He states that grammatical structure is universal and that the brain structure for language is the same in everyone. His argument may be supported by Nelson, who states that the brain creates and strengthens new, different pathways to activate language competency in someone with a speech and language impairment.

Other research suggesting that learning is domain specific comes from Reynolds et al. (1990). Their work shows that children can have intact decoding abilities (perception) but impaired comprehension abilities (conception). They suggest that effective versus ineffective attention strategies determine reading success or failure and conclude that reading ability is controlled by separate mechanisms.

Smith (1999) would argue that language development is domain general and that learning is experience dependent. General-purpose mechanisms and associative learning are the means by which we learn. Smith states that words initially have no meaning until they are associated with something. According to the shape bias, children attend to shape when naming objects because, as children learn words, the act of naming becomes a contextual cue that automatically recruits attention to shape.

Siegler would also agree that learning is domain general. According to his research, we possess general learning mechanisms that are refined by variable outcomes. Periods of stability (low variability) alternate with periods of transition (high variability) that help us determine which strategies are effective and which are not. Learning is most likely when previous strategies weaken (due to failure or negative feedback) and new, more efficient strategies strengthen (e.g., infant reaching, as described by Smith and Thelen, 2003).
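To make the strategy-competition idea concrete, here is a minimal sketch; it is my own toy illustration, not Siegler's actual model or data, and the strategy names and success rates below are invented assumptions. Each strategy carries a strength that grows with success and shrinks with failure, so choices gradually shift toward whatever pays off.

```python
# Toy sketch of strategy competition (illustrative only, not Siegler's model).
# Strategy names and success rates are assumptions for the example.
import random

random.seed(0)

strengths = {"count_all": 1.0, "count_on": 1.0, "retrieval": 1.0}
success_rate = {"count_all": 0.4, "count_on": 0.7, "retrieval": 0.9}  # assumed payoffs

def choose(strengths):
    """Pick a strategy with probability proportional to its strength."""
    total = sum(strengths.values())
    r = random.uniform(0, total)
    for name, s in strengths.items():
        r -= s
        if r <= 0:
            return name
    return name

for trial in range(100):
    strategy = choose(strengths)
    succeeded = random.random() < success_rate[strategy]
    # success strengthens a strategy; failure (negative feedback) weakens it
    strengths[strategy] *= 1.1 if succeeded else 0.9

print({k: round(v, 2) for k, v in strengths.items()})
```

After enough trials the more effective strategies come to dominate selection, while the early variability is what lets the weaker ones get tried and discarded.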

I find the domain-general arguments more compelling because they suggest that learning is experience dependent. If learning were entirely prewired and innate, then education and rehabilitation would be useless. Instead, genes are expressed differently through different experiences, and increases in experience lead to confirmation of certain probabilities (Saffran, 2003). Learning must be domain general because otherwise associative learning would not occur. According to Nelson, learning is experience dependent; therefore, environmental influences must play a role. If a child with a brain impairment were viewed as unable to change, then what would be the point of teachers, speech pathologists, occupational therapists, physical therapists, and so on? Likewise, how would we explain individual variability? If our brains were composed only of specific mechanisms that direct learning, then we would be more alike than different.

The only way to reconcile these perspectives is to recognize that nature and nurture both contribute to learning. Our learning is experience dependent; however, our range of reaction and early experiences constrain later change and the development of our abilities. Our brains do contain structure, but we all rely on external support for brain functioning. Infant brains, for example, have greater plasticity and are therefore more open to change through learning. We cannot attribute all of our learning to domain specificity or to domain generality, because both are crucial to our development.

cognitive learning mechanisms

Teaching reading - CEP 903 10.5.07

  • Statistical learning, as introduced by Saffran (2003), suggests that language is shaped by human learning mechanisms rather than by innate brain structures. We learn to predict what comes next by becoming familiar with statistical regularities in language sequences. For example, the word ‘the’ or ‘a’ in text serves as a cue that a noun is likely to follow. An increase in language experience therefore increases our ability to confirm certain language probabilities (a toy sketch of this transitional-probability idea appears after this list). Teachers can apply Saffran’s work when teaching reading by exposing students to print at an early age and focusing on language concepts such as phonetic features, word boundaries, and syntax.
  • Embodiment is the claim that we learn through our bodies, interacting with our environment to obtain knowledge; our behavior results from encoding perceptions. In a study by Noice and Noice (2001), for example, movement facilitated recall even without the intent to memorize, because environmental cues provided scaffolding. Their finding suggests that active experiencing helps create meaning and enhances memory because real actions have properties that verbal descriptions do not. Teaching reading in a way that utilizes embodiment could include role-playing parts of a story or using concrete objects and hands-on experiments to help students visualize abstract concepts (Noice and Noice, 2001; Gentner, 2002).
  • Test-enhanced learning suggests that testing students on material is more effective for retaining information in long-term memory than repeatedly studying the material (Roediger, 2006). Studying (or rereading) is different from retrieving information from memory (testing), because the latter lets us practice the skill actually required on future tests. Knowing this, an effective approach would be to expose students to reading material and administer periodic tests over it. Students taught this way have the opportunity to demonstrate what they have learned on multiple occasions, each time strengthening their knowledge of the material.
  • Maternal elaboration has been found to affect child elaboration and memory over time (Reese, 1993). Elaborative techniques provide scaffolding in which the learner can re-experience the information each time a new elaboration is introduced. Reese’s findings suggest effective ways for teachers and students to interact. Teaching by elaboration rather than repetition provides bidirectional scaffolding that increases teacher-student interaction, facilitating communication and recall. In other words, asking a variety of questions about reading material allows the learner to make more connections because more resources are being activated, whereas repeating the same question often does not provide adequate scaffolding.
  • Glenberg (1997) describes memory as embodied, built by combining (meshing) different sets of actions. To remember something, we either use clamping to set memories aside and attend to the environment, or use suppression to dampen current perception and enhance memories. For Glenberg, limitations on learning are based on the possible next steps that our experience tells us are available. For example, in repetition priming of language, previous exposure to reading material facilitates our current ability to process it. Teaching implications of repetition priming connect to our ability to statistically predict language (Saffran, 2003) and to mesh concepts (Glenberg, 1997). Teaching students to remember what they read can be modeled after repetition priming by creating multiple opportunities to associate concepts through meshing related words. For example, presenting the word ‘volcano’ and having students choose related words from a list primes ‘magma’ or ‘lava’ but not ‘dog’ or ‘cat.’ ‘Magma’ and ‘lava’ are likely stored close to ‘volcano’ in memory, whereas ‘dog’ and ‘cat’ are not; therefore, memory retrieval is strengthened.
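To make the transitional-probability idea mentioned above concrete, here is a minimal sketch, assuming a tiny made-up corpus. It illustrates the general mechanism only, not Saffran's experimental procedure; the corpus, the word list, and the helper name transitional_probability are all invented for the example.

```python
# Toy estimate of transitional probabilities P(next word | current word)
# from a small, made-up corpus (illustrative assumption, not real data).
from collections import Counter, defaultdict

corpus = (
    "the dog chased a ball . "
    "the cat watched the dog . "
    "a girl read the book ."
).split()

pair_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    pair_counts[current][nxt] += 1

def transitional_probability(current, nxt):
    """Estimate P(nxt | current) from bigram counts."""
    total = sum(pair_counts[current].values())
    return pair_counts[current][nxt] / total if total else 0.0

# In this toy corpus, all the probability mass after 'the' falls on nouns,
# mirroring the intuition that an article cues an upcoming noun.
for word in ("dog", "cat", "book", "chased"):
    print(f"P({word!r} | 'the') = {transitional_probability('the', word):.2f}")
```

A learner that tracks such statistics over enough input can begin to anticipate word boundaries and word classes without explicit instruction, which is the sense in which experience, rather than prewired structure, does the work.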

Empirical evidence

Brain research provides compelling evidence about learning and development in that it gives specific examples of how bidirectional change affects brain development (Nelson, 1997). At its simplest, Nelson's point is that learning and thinking change the brain. Bidirectional change means that experiences lead to brain change and that a changed brain leads to changed experiences. For example, an infant brain possesses great capacity for change because its neurons are initially uncommitted. Once connections form, they strengthen and neuroplasticity decreases. As a result, each experience sets limits for later change.

Infancy can be considered a foundation for future learning in which the boundaries of brain development are set by a range of reaction. An infant's range of reaction constrains intellectual ability, and environmental influences determine whether the outcome within that range is positive or negative. The range of reaction itself could be determined by a number of factors. A stressful pregnancy and malnutrition are biological influences that contribute to decreased brain development; poor parenting and lack of exposure to educational materials are environmental influences that contribute to negative bidirectional change. In each case, fewer opportunities remain for high intellectual ability.

Brain research has clear implications for education in that it gives teachers hope that the brain has plasticity. At the same time, it supports the notion that parents have a crucial role in shaping healthy early learning experiences. Furthermore, it allows teachers to understand why children have different intellectual abilities and behave differently. Brain research confirms that early intervention is more effective than later intervention because it provides more opportunity for positive brain change and experience.

Unlike the convincing evidence in Nelson's research, Reese's (1993) evidence that maternal elaboration is an effective technique for facilitating children's memory is less compelling. According to Reese, highly elaborative mothers facilitate child elaboration and memory of information. The supporting evidence argues that elaboration creates scaffolding for recalling past events and that a child can contribute more information about those events if the mother is elaborative.

The first problem with these findings is that they are not generalizable. The results show that a highly elaborative style is associated with girls more than boys. If boys are exposed to less elaboration, they will inevitably become less elaborative as adults; in other words, socializing girls to elaborate more begins as a small difference that becomes magnified over time. As a result, fathers are likely to be much less elaborative, which brings me to my next criticism: paternal elaborative style was not addressed, perhaps because it would fail to support the claim that a highly elaborative style is generalizable. The study also fails to take environmental factors into consideration. The ability to use elaborative cues may vary with a child's ability to block out distracting information in the environment; if memory is enhanced by scaffolding, it should be worsened by irrelevant cues. Finally, misleading elaborations may foster recall of inaccurate information, so a highly elaborative style from a misinformed source may hinder rather than help accurate recall.

Sunday, November 6, 2011

free will vs. determinism.

free will vs. determinism. PHI 101, 4 April 2003

as Americans, we would all like to believe that we are free. however, does the word "freedom" really allow us to choose the path of our actions? we think that our actions reflect our desires, but what about the idea that all of our actions are causally determined? how can we say that what we do is completely a reflection of our own free will and not the consequence of a previous act? and what about responsibility? if all of our actions are determined, then how can we be responsible for those actions? are these two positions the only solutions to the problem of free will and determinism, or is compatibilism another possibility? essays written by Nagel, Taylor, Wood, and Stace all contribute arguments for and against these positions and attempt to unravel the uncertainties.

Thomas Nagel introduces the topic of free will and determinism in the book What Does It All Mean? his primary example in this selection deals with choosing between two options: a person must choose between a peach and a piece of chocolate cake. he states that this is a situation where free will is applicable because no inevitable force makes the person choose one over the other.

in presenting the determinist perspective, Nagel claims that circumstances that exist before we act determine our actions and make them inevitable. in other words, other possibilities are ruled out and there is no other choice given the circumstances. referring back to the peach and the piece of chocolate cake, if it were not predetermined that one would choose the chocolate cake, and the decision was made without explanation, then how could it have been that individual's own doing? and if it was determined that one would choose the chocolate cake instead of the peach, then how could one have also chosen the peach? if all of our actions are determined, then how is it possible to make decisions at all, since making a decision implies that there are alternative choices?

I believe that the combination of free will and determinism is the answer to these questions. the decision to choose the piece of chocolate cake may be one's own free act, but one's desires are also an agent of cause. the desire for the chocolate cake therefore functions as the motivating factor, or the cause, for the free act to be carried out. on the other hand, if one has an equally strong desire for the peach and the piece of cake, one can still deliberate in order to choose one, the other, or neither.

Richard Taylor, the author of the essay Libertarianism: Defense of Free Will, believes that free will exists, but only to a certain extent. in an attempt to justify this belief, he explains that human beings are sometimes the causes of their own behavior. however, he does not seem to support compatibilism, because he claims that it makes no sense to ask whether the causes of one's actions (decisions and desires) are themselves caused. yet if there were no causes for our desires, why would we choose to act on them at all? surely there are reasons for what we desire and for the decisions we base on those desires.

for example, if a man were to think about robbing a bank, there must be some rationale behind his thoughts. maybe he is poor and cannot afford to feed his family, so he may feel that his family's needs are a plausible cause for him to steal the money. or even if he wanted the money simply to satisfy his own greed, he would still be making the decision based on that desire. in another instance, say that a man's wife has just gotten a haircut that he thinks looks horrible, but he wants to avoid hurting her feelings when she asks how it looks, so he lies and tells her that it looks wonderful. the decision he made was probably based on his desire to keep her happy, not purely on his own free will. his decision was swayed by the fear that he might offend his wife, so he used that as grounds to tell the lie.

in another excerpt, Determinism: Free Will is an Illusion by Ledger Wood, a new perspective is taken. while defending determinism, Wood rejects libertarianism. the basis of his argument is that all events are causally explained and that whatever happens at any given time is the effect of some antecedent cause. with reference to Nagel's example, Wood defends determinism by stating that if two choices are equally appealing, then one would pick neither. but what if the person were forced to choose, or told that his survival depended on making a decision? say, for example, that an intelligent high school student is looking for a college to attend because he feels that a college education is essential to his survival and to his future career opportunities. if all the positives and negatives of two equally appealing colleges are weighed and he cannot decide which school he would prefer, it is highly unlikely that he would choose neither. it matters not which choice he makes; the point is that he will eventually make a decision.

something that must be taken into consideration when discussing free will and determinism is the concept of responsibility. according to Wood, all of our actions are determined, which suggests that we have no control over them. if this is the case, then how can we be held responsible for anything we do? likewise, if we have no control over our actions, then how can they be called our own? if universal causality is an unfalsifiable concept, as the book suggests, then how can it be justified at all? just because a cause cannot be found does not mean there is not one, nor does it mean that there is one. therefore, it is necessary to reject the principle of determinism.

if a ten-year-old girl is raped and murdered, and we apply determinism to the situation, how can we excuse the actions of the rapist and murderer? it would make no sense to simply state that his actions were predetermined and that he therefore cannot be held responsible. of course there may be some underlying cause, but there must also have been some free will, because the man could have chosen otherwise. and because he made the decision based on a set of desires, we have to call it free will. had those desires been different at that time, he might have chosen differently.

from a fourth perspective, W.T. Stace, the author of Compatibilism: Free Will is Consistent with Determinism, declares that the two concepts can, in fact, be reconciled. the first point he makes with regard to responsibility is that we must have free will in order to be morally responsible, yet our actions can still be caused. if human actions were simply causally determined, how could punishments and rewards have any meaning? if a man's good behavior were predetermined, why would there be any reason to reward it? likewise, if determinism controlled all of our actions, it would be unjust to punish a bad person for actions he cannot escape. if he is not free to make his own decisions, then he cannot be blamed for what he does wrong.

as the starting point for his compatibilist argument, Stace emphasizes the distinction between free acts, which are voluntary, and unfree acts, which are involuntary. by tying free acts to internal causes and unfree (determined) acts to external causes, he illustrates his argument quite clearly. in other words, he believes that free acts are caused by psychological desires in the agent's mind while unfree acts are caused by physical forces outside the agent.

his most plausible example describes a scenario in which a man on the street demands another man's wallet and threatens to shoot him if he refuses. looked at from different perspectives, this situation shows that the free will and determinist arguments alone do not make much sense. acting purely from his own free will, the victim would of course not voluntarily give the wallet to the stranger. however, if he did give the man his wallet to avoid being shot, it would certainly not be simply because he desired to do so, but more likely because of his fear. if we apply determinism here, the man gave up his wallet because there was a previous cause, which in this case is the man holding him at gunpoint.

we can plainly see that both free will and determinism are necessary to explain human behavior. none of the decisions we make are entirely free, because each has an antecedent of some kind. however, not all of our actions are based on antecedents alone: with each decision we make, there is an extent to which our own deliberation helps determine the outcome. therefore, free will and determinism are compatible.

the argument against religion.

the argument against religion. PHI 101, 24 February 2003

religion in the 21st century is controversial, to say the least. even as far back as William Paley's The Watch and the Watchmaker, it seems that answers about faith and religion have been elusive and highly questionable. in this essay, Paley claims that the existence and purpose of a watch can be compared to the existence and purpose of the world. the question I must ask, then, is this: how can something as mysterious as our world be compared to clockwork?

in the beginning of Paley's argument, he explains that a stone lying on the ground cannot be said to serve any purpose, because it cannot be proved that it was intended to be there and it could have lain there since the beginning of time. he follows this with the argument that a watch found on the ground is completely different, because people can understand that a watch was created for the purpose of telling time. my argument against this is that a stone alone might seem purposeless, but combined with other stones it can be used to create foundations for buildings or roads, which are part of everyday life. likewise, a watch broken down into its parts would be basically useless, but once constructed into an actual watch, it serves a great purpose.

referring to the question of god's existence, many people claim that, in theory, god created humans. not only is it impossible to find concrete evidence for the existence of an intangible being, it is difficult to imagine that such an indefinable being could create other purposeful, tangible things. for example, just like the stone and the watch, the parts of a human body broken down would not serve a very great purpose. how could a human being function with just an arm or just a leg? it simply is not possible, because humans are too complex to be compared with the mechanical functioning of a watch. even if human behavior were predetermined, we would still never be as predictable or as consistent as the ticking of a watch. the argument in this sense is not plausible because it does not consider the stone and the watch under the same circumstances; therefore, they cannot be paralleled to the existence of god and to what is assumed to have been created by god.

in another instance, Paley describes a watch that may malfunction or may not be perfect, yet whose design and purpose are still recognizable. this argument is weak simply because the watchmaker did not intend for the watch to malfunction. god creates human beings, none of whom are completely perfect, so in a sense Paley is suggesting that we are all intended to become malfunctioning watches. a perfect watch would keep perfect time, but a perfect human does not exist. throughout the essay, he never addresses what god intends as the purpose of human beings, so how can that purpose be compared to the watch's, which seems so clear? what constitutes a "perfect" human being? these are questions that cannot be answered, and they therefore render Paley's argument unconvincing.

the final point I will mention concerning Paley's case for the existence of god is in his application section, where he claims that every manifestation of design that exists in the watch must exist in nature. however, he then explains that nature surpasses the art of a watch in complexity, curiosity, and variety. this being so, how is it even reasonable to attempt a comparison between the two when it is obvious, on these facts alone, that they are different? a watch's purpose is mechanical and unchanging, while a human's purpose, although it cannot be concretely defined, is clearly not. we are humans in a diverse, mysterious world in which nothing is really meant to be unchanging or completely predictable. sudden, random, and sometimes unexpected events such as earthquakes or tornadoes occur every day in nature, whereas the possible outcomes for a watch are very limited: it either works or it does not. the idea of a grand designer for this world cannot account for the purpose of its creatures.

what is knowledge?

back in the day (which, despite what others may tell you, was NOT a Wednesday), I was an astute philosophy student. to this day, I often revisit my term papers to compare my current views and beliefs to what I believed back then. surprisingly, my moral compass has not shifted much, and I tend to believe a lot of the same things today as I did in 2003. so, if you're looking for some explanations, read on!

what is knowledge?
PHI 101, 24 January 2003


If all 6.2 billion living people were asked "what is knowledge?", it is quite possible that one would arrive at 6.2 billion different responses. along with the inquiry into such a word comes variation in each individual's philosophy of knowledge. in such an instance, it becomes necessary to determine which explanation of knowledge is most plausible by attempting to analyze and justify what is believed to be true. by examining both Plato's Theory of Knowledge and Hume's Enquiry Concerning Human Understanding, it is apparent that each philosopher has taken a very different approach to the proposed question.

Plato, an ancient rationalist philosopher, proposed a theory of knowledge stating that reason is the key to true knowledge. throughout his dialogue, Plato distinguishes reason from sense perception and explains that they belong to two different worlds: the world of being and the world of becoming, respectively. in other words, Plato believed that reason leads to the world of being, of unchanging and absolute ideas that are independent of experience (a priori).

Hume, by contrast, was a skeptical empiricist who believed that knowledge is acquired through sense perception. his theory argues that knowledge comes with experience and is comprehended through impressions. as a skeptic, Hume also doubts that we can know anything for certain, arguing that certainty is, at best, only probability, since much of what we claim to know from experience rests on ideas (reason) rather than directly on impressions (sense perception).

the philosophy of knowledge cannot be summed up by rationalism, simply because the concepts of innate ideas and reason are not plausible on their own. experience and sense perception are imperative for acquiring knowledge. for example, if I were to touch the burner on a stove with my hand for the first time, I would quickly pull my hand away to relieve it from the scalding heat. a rationalist, who believes that knowledge is independent of experience, would say that I knew prior to touching the stove that it would burn my hand. but had I known that, I would never have touched the stove in the first place. an empiricist, on the other hand, would claim that I did not know the stove would burn me, and that only after experiencing the burn would I realize that I should not touch it. even if I had known in advance that the stove would burn me, I would have acquired that knowledge either by watching someone else burn his or her hand and react the same way, or through verbal communication, two forms of sense perception.

not only is experience imperative for acquiring knowledge, sense perception also contributes to the comprehension of simple and even complex concepts. in Hume's Enquiry Concerning Human Understanding, he addresses the idea that lacking a particular sense brings an inability to understand the corresponding ideas. he names an instance in which a man deprived of sight can form no notion of colors. for example, since he has lost his ability to see, it would be impossible for him to understand that grass is green, a concept rationalists claim can be grasped through reason. obviously, he would have no idea what green looked like even if it were explained to him, because he could not relate the idea to something he understood. however, if his sense of sight were suddenly restored, he would have no difficulty understanding the concept of green.

in conclusion, it can be said that if, in fact, we can know anything, it can only be known through experience and sense perception. knowledge is not something that can be touched with a hand or seen with the eye, so whether it even exists or can be justified at all is uncertain. however, reason alone can only give us intangibles and phenomena that can never be explained thoroughly without some skepticism, whereas sense perception presents clear impressions on which true beliefs can rest.