Assessment in Adult Education: Affective, Formative, Classroom, and Summative

12/07/2011 17:00










A paper submitted in partial fulfillment of course requirements

Author: Mr. Alvin Wallace

Course: ED7712

Instructor: Dr. Sherion Jackson





Table of Contents

Introduction
Description of the Type of Students / Target Audience
Assessment Section:
The Nature of the Assessment
Teacher-Designed Feedback Form
Accommodating Students with Special Needs
Addressing the Diversity Issues
Interpretation of Results
The Asynchronous Discussion or Forum Participation Scoring Rubric
Portfolio Scoring Rubric
Comparison Criteria
Validity as an Issue
Conclusion
Bibliography























Introduction section: Description of the type of students (their ages, academic status, and so on)

Target audience


The term “generational groups” can be used to describe basic differences among the three groups that comprise “the majority of continuing higher education students today: Baby Boomers, Generation X, and Millennials,” according to Sandeen (2008). The term alludes to each “generation’s world view or ‘peer personality,’” which “can be attributed to the social context that existed during the youth phase of each generation.” It is an attempt to describe the “social context for each generation as youth,” “each generation’s major concerns today,” or “each generation’s basic career orientation.” Yet the higher education arena is being inundated with the so-called “non-traditional student,” who, following Harvey (n.d.), comes from “social classes, ethnic groups or age groups that are underrepresented” and “may include gender groups in some areas.” In countries such as Canada, the USA, and the UK, traditional students “tend to be a (recent) high-school leaver (around the ages of 17-20), from (upper) middle classes”; in American usage the non-traditional term refers to students at higher education institutions who are not of the typical age or social situation of the majority of their peers, “with or without physical or learning disabilities.” They “may have achieved their GED late,” may be former homemakers preparing to join the workforce, unsuccessful business people training for a different profession, or individuals using a motorized wheelchair or an animal companion. In some cases they have previously attended college and are “returning” after a break of several years, or they may have graduated from high school and “went directly to the workforce.” At any rate, this is “the fastest growing segment of the student population.”


Daigre (n.d.) stresses the point that “the instructional designer must be able to identify the target audience.” To maintain that focus, this section attempts the construction of an audience profile, with the aim of designing assessment parameters that resonate with the key characteristics of the IT learners it targets. The suggested assessment design presumes the use of a hybrid course design; it also assumes that issues such as a) the amount of funding available from government sources, b) the composition of the student market, and c) the degree and quality of the responsiveness of institutions of higher education to particular segments of that market are knowable. Swail (2002) argued that “more students from all backgrounds are attending college than ever before,” and yet “large gaps still exist in who goes where and who completes degree programs.” Using statistical argumentation based on demographic trends, Swail (2002) asserts that “low-income and first-generation students, as well as students of color, are less likely to attend four-year institutions and to persist through degree completion than are more advantaged students.” Also, the “demand for postsecondary study is at an all-time high for both students of traditional and nontraditional ages; for-profit and certificate-based providers are becoming more the norm than outliers.”

Moncarz (2002) states that “the information technology workforce is defined differently by trade organizations and Government sources.” The Information Technology Association of America uses the eight career clusters developed by the National Workforce Center for Emerging Technologies. The IT career clusters include “programming and software engineering, technical support, enterprise systems, database development and administration, Web development and administration, network design and administration, digital media, and technical writing.” The U.S. Department of Commerce report “Digital Economy 2002” defines “workers in information technology occupations as those who design, manufacture, operate, maintain, and repair information technology products and provide related services across all industries.” Moncarz (2002) cited Rita Colwell, director of the National Science Foundation, who “notes that there are many pathways for becoming an information technology worker” because the training can range “from a few months for certification to 6 years for a doctoral degree.” This assessment is designed for the IT learner seeking to obtain the Cisco Certified Network Associate (CCNA) certification. Analysis of U.S. Department of Labor statistics shows that a decade ago “most information technology workers—almost 70 percent—had a bachelor’s or higher degree, although the number who had some college but no degree is rapidly increasing and accounted for almost 16 percent of these workers.”



Source: Moncarz, R. (2002). What is an information technology worker? Occupational Outlook Quarterly, Fall 2002, p. 40. Retrieved 11/09/2011 from http://findarticles.com/p/articles/mi_qa5448/is_200210/ai_n21319410/?tag=content;col1


Moncarz (2002), using “anecdotal information,” contends that it “suggests that many people attend community colleges not to earn degrees but to take computer-related courses in hopes of getting a job or as a way to retrain and update their skills.” Regarding the growth of certification, Moncarz (2002) reported that the Information Technology Association of America’s study on the information technology workforce acknowledges “the significance of certification,” which “has grown in each of its job categories in the last year.” Moncarz (2002) also cited the remarks of Kenneth Bartlett, project director for the National Research Center for Career and Technical Education, to the effect “that as of August, there were almost 100 vendors and organizations offering more than 670 separate certifications in information technology.” Data from the U.S. Department of Education’s National Center for Education Statistics demonstrate “that the number of awards of less than 1 year granted in computer and information sciences grew almost 400 percent between 1990 and 2000.” Remarking on the popularity of the certification route, Moncarz (2002) mentioned Clifford Adelman, author of The Certification System in Information Technology, who “describes a ‘parallel universe’ outside conventional educational routes for potential information technology workers to develop skills.”

The aforementioned demand is encapsulated in a trend that finds distance education “proliferating at all types of institutions,” while institutions of higher learning “are being pressed to serve a student body that is vastly different from only a few decades ago.” The reality is that the instructional designer is facing a situation in which “a dramatically different cohort of high school students is preparing for postsecondary study.” That reality also includes facing a group of prospective students that “will be much less prepared for college than the current entering cohort.” Swail (2002) cited Kipp (1998) to highlight the fact that the “most rapid growth in the population will be among groups” that are: a) “traditionally more likely to drop out of school,” b) “less likely to enroll in college-preparatory course work,” c) “less likely to graduate from high school,” d) “less likely to enroll in college,” and e) “least likely to persist to earn a baccalaureate degree.” Yet they constitute a real market segment to be addressed. Elsewhere this writer has tried to draw attention to the macro- and micro-political issue sets that attach to, and often undermine, any effort to institute diversity as a policy within an institution of higher education. Often, the micro-political behavior of some learners will thwart even-handed attempts by the instructor to invoke ethical sanctions. Perhaps more attention to the learning styles of the students will help to underscore the fact that such micro-level machinations are not the official position of the school itself. These negative manifestations can be categorized as affective behavior, manifested for whatever reason. But the fact that the learner population can also be typed means the design of the assessment should allow for the affective styles associated with the geek, so to speak.

Learner styles are key in developing effective instructional design materials. The instructional designer must be able to identify the target audience, and prior to doing so needs an understanding of learner styles or characteristics. Smith and Ragan identify four categories of learner characteristics: cognitive, physiological, affective, and social. Each of these categories is important, and they may or may not all be used at one time in a learner analysis. This portion of the essay will address, as suggested by Daigre (n.d.), “the target audience interests, motivational level, attitude, perceptions, self-concept, anxiety, beliefs, and attribution,” as well as the “social characteristics,” or “how the learner relates to his or her peers,” “authority,” and “cooperation or competition” tendencies. The socioeconomic background of the learner and social characteristics such as racial/ethnic background and affiliations were addressed earlier. The actual test instrument is composed of ten multiple-choice items, four true-or-false items, and two essay items.

IT learners have been referred to as geeks, a term said to carry “different meanings,” among them “a computer expert.” As a slang term it can carry a pejorative interpretation, in the same sense as “nerd, gimp, dweeb, dork, spod and gump,” and has a similar meaning. Richard Clark, “in a 2007 interview on The Colbert Report, stated the difference between nerds and geeks is ‘geeks get it done.’” Julie Smith is credited with defining a geek as “a bright young man turned inward, poorly socialized, who felt so little kinship with his own planet that he routinely traveled to the ones invented by his favorite authors, who thought of that secret, dreamy place his computer took him to as cyberspace.” She added a dimension that suggests something akin to a personality disorder, i.e., “dissocial” and/or “obsessive-compulsive.” As she put it, cyberspace was “somewhere exciting, a place more real than his own life, a land he could conquer, not a drab teenager's room in his parents' house.”
Regarding the first, dissocial tendencies manifest behaviorally as “failure to conform to social norms”; “deception, as indicated by repeatedly lying, use of aliases, or conning others for personal profit or pleasure”; “impulsiveness or failure to plan ahead”; “irritability and aggressiveness, as indicated by repeated physical fights or assaults”; “reckless disregard for safety of self or others”; “consistent irresponsibility, as indicated by repeated failure to sustain consistent work behavior or honor financial obligations”; and “lack of remorse, as indicated by being indifferent to or rationalizing having hurt, mistreated, or stolen from another,” where “the individual is at least age 18 years,” “there is evidence of conduct disorder with onset before age 15 years,” and “the occurrence of antisocial behavior is not exclusively during the course of schizophrenia or a manic episode.” The second is “characterized by a pervasive pattern of preoccupation with orderliness, perfectionism, and mental and interpersonal control at the expense of flexibility, openness, and efficiency.” Some of these traits can, at times, prove useful for the learner, but they can also give fellow learners reason for caution. Should the instructor have cause for concern about a particular learner's behavior, the counseling department will be contacted. Other definitions include: a) “a derogatory reference to a person obsessed with intellectual pursuits for their own sake, who is also deficient in most other human attributes so as to impair the person's smooth operation within society”; b) “a person who is interested in technology, especially computing and new media.
Geeks are adept with computers, and use the term hacker in a positive way, though not all are hackers themselves”; and c) “a person who has chosen concentration rather than conformity; one who passionately pursues skill (especially technical skill) and imagination, not mainstream social acceptance.” This latter set of definitions was chosen because they accentuate a certain “intensity, depth,” or focus on IT coursework as the “subject of their interest.” The subject matter of the course to be taught comes from the world of IT. Certification in computer networking was a hotly pursued credential among those who populated the business and academic worlds during the first decade of this century, and the requisite knowledge to operate the equipment used to achieve the sought-after “connected effect” was, at the time, in high demand.





Assessment section:

The nature of the assessment

This proposed assessment will attempt to evaluate the knowledge associated with computer networking by testing the learner's requisite knowledge of techniques, methods, and procedures across the range of topics associated with computer networking: 1) the OSI model, 2) LAN switching, 3) VLANs, 4) LAN design, 5) routing protocols, 6) access control lists (ACLs), 7) WAN design, and 8) network management (Lorentz, 2001, pp. 1-164). The specifics of the range of topics are delineated in the learning objectives. A significant portion of the examination refers to the referenced curriculum, which is the one used by many institutes of higher learning to prepare the learner to test for industrial certification; its content is analogous to the course content delineated above. The specific assessment proposed is intended to evaluate the learner's synthesis of the knowledge contained in the first seven sections of the course content cited earlier, which incorporates the cognitive dimensions of knowledge as delineated in the revised edition of Bloom's Taxonomy of Educational Objectives (Gronlund and Waugh, 2009, pp. 223-226). The multiple-choice and true/false items target factual, conceptual, and procedural knowledge, while the essay questions are designed to assess the higher-order thinking skills (HOTS) of analysis, synthesis, and evaluation.
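To make concrete the kind of procedural knowledge the objective items target, consider subnet reasoning of the sort covered under the routing and LAN design topics. The following sketch uses Python's standard ipaddress module; the addresses and the /26 mask are invented for illustration and are not drawn from the assessment itself:

```python
import ipaddress

def same_subnet(host_a: str, host_b: str, prefix: int) -> bool:
    """Return True when both IPv4 hosts fall inside the same subnet."""
    net_a = ipaddress.ip_interface(f"{host_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{host_b}/{prefix}").network
    return net_a == net_b

# A typical objective item: given a /26 mask, can these two
# workstations reach each other without crossing a router?
print(same_subnet("192.168.1.10", "192.168.1.60", 26))  # True: both in 192.168.1.0/26
print(same_subnet("192.168.1.10", "192.168.1.70", 26))  # False: .70 sits in 192.168.1.64/26
```

An essay item, by contrast, might ask the learner to justify the choice of mask for a given host count, which is where the higher-order analysis and evaluation skills come into play.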





Affective assessment


Angelo & Cross (1993, p. 3) advocated the use of observation by the instructor to gather a “collection of frequent feedback on student learning,” to aid “the design of modest classroom experiments,” with the goal of determining “how students learn and, more specifically, how students respond to particular teaching approaches.” Ultimately, this information would prove useful when instructors sought to “refocus their teaching to help students make their learning more efficient and more effective.” This advice was offered after corroboration of the off-point assumption of college instructors that “their students were learning what they were trying to teach them.” Yet “too often, students have not learned as much or as well as was expected. There are gaps, sometimes considerable ones, between what was taught and what has been learned.” This portion of the paper will use an empirically based set of research findings, i.e., the corpus of literature on cognitive (and/or learning) style, affective assessment, and self-efficacy studies, to explain and refine guidelines that foster and enhance the learning experiences of the CCNA networking learner. The argument presented relates more to formative than to summative assessment, and any proposed assessment vehicle should be used in a pre-/posttest format. Specifically, as Angelo & Cross (1993, p. 5) wrote, the focus should be on classroom assessment as a subcategory of formative assessment, where the purpose is “to improve the quality of student learning, not to provide evidence for evaluating or grading students; consequently, many of the concerns that constrain testing do not apply.”

This section of the document specifies the domains of information systems development, within which the design, implementation, and troubleshooting of a network should be understood as one aspect. Using Habermas' orientations and Etzioni's domains, Hirschheim, Klein, and Lyytinen (1996) have formulated a way to talk about knowledge through the “intellectual structures of information system development”: domains, orientations, object systems, and development strategies. Alonsabe (2009) portrays the behavioral objectives in taxonomic form.

Alonsabe (2009) used the following focal concepts: a) “attitude,” b) “motivation,” and c) “self-efficacy.” Foundationally, Alonsabe (2009) states that “attitudes” can be defined as “a mental predisposition to act that is expressed by evaluating a particular entity with some degree of favor or disfavor,” and also that “individuals generally have attitudes that focus on objects, people or institutions,” while adding that “attitudes are also attached to mental categories” and that “mental orientations towards concepts are generally referred to as values.” The functional components of attitudes according to Alonsabe (2009) include: 1. Cognitions, i.e., “beliefs, theories, expectations, cause-and-effect beliefs, perceptions relative to the focal point; statement of beliefs and expectations which vary from one individual to the next”; 2. Affect, which “refers to feelings with respect to the focal object – fear, liking, anger” (to some, the color blue suggests loneliness; to others, calm or peace); 3. Behavioral intentions, i.e., “our goals, aspirations, and our expected responses to the attitude object”; and 4. Evaluation, the “central component of attitudes; imputations of some degree of goodness or badness to an attitude object; positive or negative attitude toward an object; functions of cognitive, affect and behavioral intentions of the object; stored in memory.”

Given these components, the designer can rightfully expect the “relationship between TSE and CSE” to be exhibited in the psycho-motor behavior and the attitudes evinced as the learner works through the performance/classroom assessment, alongside the resident “motivation.” The latter, Alonsabe (2009) states, is “a reason or set of reasons for engaging in a particular behavior.” It includes “basic needs, object, goal, state of being,” and any “ideal that is desirable.” Alonsabe (2009) asserts that the term “also refers to initiation, direction, intensity and persistence of human behavior.” Regarding the last of the focal concepts, “self-efficacy,” Alonsabe (2009) states that it is “an impression that one is capable of performing in a certain manner or attaining certain goals,” or the “belief that one has the capabilities to execute the courses of actions required to manage prospective situations,” i.e., “a belief (whether or not accurate) that one has the power to produce that effect.” It is a perception that is either proved or disproved during the assessment session: does the learner exhibit the “ability to reach a goal,” where “over-efficaciousness negatively affected student motivation, while under-efficaciousness increased motivation to study”? Any of the techniques cited by Angelo and Cross (1993, pp. 255-316) can be used to monitor and assess this affective dimension. Admittedly, this is “self-report,” as opposed to instructor observation, but it is amenable to computer-based tabulation and scoring using either “rating scales” or “semantic differential (SD) scales,” and such data can support the formation of inferences useful for modifying content delivery.
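A minimal sketch of the computer-based tabulation mentioned above, assuming a five-point rating scale; the item wording and response data are hypothetical, invented only to show the shape of the summary an instructor might review between sessions:

```python
from statistics import mean

# Hypothetical five-point rating-scale responses
# (1 = strongly disagree ... 5 = strongly agree).
responses = {
    "I feel confident configuring a small LAN": [4, 5, 3, 4],
    "Troubleshooting makes me anxious": [2, 3, 4, 2],
    "I persist when a configuration fails": [5, 4, 4, 5],
}

# One mean per item: a simple affective profile that can be compared
# pre- and post-instruction to adjust content delivery.
for item, scores in responses.items():
    print(f"{item}: {mean(scores):.2f}")
```

Semantic differential items could be tabulated the same way, with each bipolar adjective pair scored on its own numeric scale.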



Hirschheim, Klein, and Lyytinen (1996) arranged the three information systems development domains and their orientations as follows:

1. Technology. Under the control/instrumental orientation: information technology systems, containing a) hardware and telecommunication configuration, b) program structures and modules, and c) database and file structures.

2. Language. Under the control/instrumental orientation: formalized symbol manipulation systems, populated by a) data models and dictionaries, b) data integrity mechanisms, c) screen and form designs, and d) model management. Under the control/strategic orientation: manipulative communication systems, with the elements a) definition of terms and rules, b) communication channels, c) access rights, and d) data integrity. Under the sense-making/communicative orientation: symbolic interaction systems, with the elements a) speech acts, b) intentions, c) meanings, and d) metaphors. Under the argumentative/discursive orientation: systems for rational argumentation, composed of a) arguments, b) warrants, c) breakdowns, and d) pragmatic inference.

3. Organization. Under the control/instrumental orientation: mechanistic social systems, composed of a) tasks, b) decision processes, c) business processes, and d) organizational structures. Under the control/strategic orientation: political systems, with a) power structures, b) resource dependencies, c) interest groups, d) sources of authority, e) indirect influence, and f) negotiated orders. Under the sense-making/communicative orientation: cultural social systems, composed of a) values, b) beliefs, c) myths, d) rituals, and e) negotiated meanings and practice. Under the argumentative/discursive orientation: systems for institutional checks and balances, with the elements a) domination-free discourse, b) justification and minimization of power, c) truth and justice, and d) due process.

The reader will have noticed the use of the slash to denote the intersection, or, as Hirschheim, Klein, and Lyytinen (1996) put it, the “cross-relating” of domains and orientations. These intersections mark the set of abstractions termed “object system classes”: functional groupings of elements subsumed at the intersection of domains and orientations. Referring to the work of Lyytinen (1987) and Welke & Konsynski (1982), Hirschheim, Klein, and Lyytinen (1996) state that the concept of an “object system class” is a “succinct mechanism to abstract the fundamental ways of conceiving and classifying the variations in the targets and the behaviors associated with ISD,” while the concept of “orientation” serves as a “lens to typify and classify the range of human intentions and behaviors exhibited during systems development.” The concept of a “change domain” is used to embody and classify “the targets of systems development.” They combine to portray “nine different object system classes that identify the major intellectual structures that explore possible changes brought about in ISD,” of which network design, implementation, and troubleshooting is but one.


The domains technology, language, and organization are populated by cells labeled information technology systems, formalized symbol manipulation systems, and mechanistic systems, all of which refer to some aspect of computer networking and should be regarded as knowledge clusters, so to speak. The use of classroom assessment, however, is intended to mine and elucidate the last two cells in the language and organization domains for clues, hints, and signals that will inform the improvement of content delivery in the classroom. This hinges on the fact that they provide the “context,” and a language to talk specifically about what works for a particular class or learning module. Of course, as Angelo & Cross (1993, p. 3) point out, “what works well in one class will not necessarily work in another,” because “each class has its own particular dynamic, its own collective personality, its own chemistry….” The use of affective assessment attends to the needs of the individual learner, in recognition of the fact that “each individual student brings a complex mix of background variables to the course”: the student's socio-economic class, linguistic and cultural background, attitudes and values, as well as “level of general academic preparation, learning strategies and skills, and previous knowledge of the specific subject matter” and his or her “performance in the course,” or self-efficacy.

Alonsabe (2009) in her research defined the affective domain as part of a system developed in 1965 for “identifying understanding and addressing how people learn.” It provides descriptions of “learning objectives that emphasize a feeling tone, an emotion, or a degree of acceptance or rejection.” It presents more difficulty in analysis and assessment design because of the range of these objectives, i.e., “affective objectives vary from simple attention to selected phenomena to complex but internally consistent qualities of character and conscience.” However, Alonsabe (2009) felt it acquired a ubiquitous applicability, writing that “much of the educative process needs to deal with assessment and measurement of students' abilities in this domain,” while acknowledging that “processes in education today are aimed at developing the cognitive aspects of development and very little or no time is spent on the development of the affective domain.” So, following Alonsabe (2009), one must ask whether the learner exhibits comfort in distinguishing an AUX port from a CAT5e/6 port, in using HyperTerminal to ascertain the type of image in use and the size of available memory, and in the sequence of commands used to configure connectivity via the choice of IP addresses for each workstation and the selection of the correct protocol to make a LAN function. This moves from the “schooled” aspect to the “educated” aspect, probing at the affective dimensions resident in the assessment of the relationship between TSE and CSE exhibited by the learner.

The affective domain “contains a large number of objectives in the literature expressed as interests, attitudes, appreciation, values, and emotional sets or biases.” An important part of the acquisition of the knowledge needed to qualify as a certified network associate is the exhibition of the values, attitudes, and emotional set of others who practice the profession. The relevant terms are 1) “receiving,” i.e., “being aware of or sensitive to the existence of certain ideas, material, or phenomena and being willing to tolerate them”; 2) “responding,” i.e., being “committed in some small measure to the ideas, materials, or phenomena involved by actively responding to them”; 3) “valuing,” i.e., being “willing to be perceived by others as valuing certain ideas, materials, or phenomena”; 4) “organization,” or relating “the value to those already held” and bringing them “into a harmonious and internally consistent philosophy”; and, finally, the demonstration of the value set used by the professional IT practitioner, i.e., the capacity “to act consistently in accordance with the values he or she has internalized.” The requisite skill set then incorporates the display of observed behavior that exhibits the ability to differentiate, respond, comply, relinquish, theorize, formulate, revise, and resolve issue sets in the use of facts and knowledge in the application, analysis, synthesis, and evaluation of an organization's goals and mission statement into a functional and secure LAN and/or WAN.
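The “sequence of commands” alluded to above can be made concrete with a minimal Cisco IOS fragment of the kind a CCNA candidate is expected to produce at the console; the interface name and addresses are illustrative only:

```
! Assign an address to a router LAN interface and bring it up
enable
configure terminal
interface FastEthernet0/0
 ip address 192.168.1.1 255.255.255.0
 no shutdown
end
```

Whether the learner approaches such a sequence with confidence or anxiety is precisely the affective dimension the classroom assessment is meant to surface.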


Ortiz de Guinea & Webster (2011) sought to provide an empirically grounded discussion of “the relation between individuals' beliefs about their skills with a specific computer application and their capability assessments about a certain task carried out with that application,” that is, in general, how “TSE assessments relate to those of CSE,” drawing on the literature that describes “the heart of major theories of learning and memory” and SCT. In essence, their argument proceeds in this manner: a) “TSE and CSE are more strongly related when both the task and the software are novel or when both the task and software are well-known”; and b) “when the task is novel and the computer application is not (or vice versa), the relation between TSE and CSE is weakened.” The targeted phenomenon is “usefulness beliefs,” which follows from the work of Bandura (1986), interpreted by Ortiz de Guinea & Webster (2011) to mean that “employees' self-efficacy beliefs, or their personal judgments of their abilities to apply individual skills to organize and execute particular behavior patterns, may relate strongly to both their tasks and the computer applications used to accomplish such tasks.” Citing Chou (2001) and Hsu, Wang, & Chiu (2009), Ortiz de Guinea & Webster (2011) asserted that “constructs related to the task might have a critical role in determining users' beliefs about the software and their abilities to use it properly.” They also use Bandura's (1977, 1986, 2001) contention that “the importance of the task itself is highlighted within social cognitive theory (SCT), the theory within which the self-efficacy concept is grounded.” SCT “emphasizes the importance of the context by defining human functioning as a triadic, dynamic, and reciprocal interaction of cognitive and personal factors, behaviors and the environment.” Which is to argue that when a learner is tasked with the configuration of the varying hardware components of a network, he or she must “size it up,” i.e., “computer self-efficacy assessments do not occur in a vacuum, but in the context of a task: we use computers to accomplish an activity in our work.”


To distinguish “task-specific self-efficacy beliefs” (TSE) from “computer-specific self-efficacy beliefs” (CSE), the context in which they appear is critical, e.g., the learning lab's topology; assessment of the learner's skills therefore depends on the assumption that he or she has a working familiarity acquired from the training methods employed by the instructor. The work of Agarwal, Sambamurthy, & Stair (2000); Johnson & Marakas (2000); Olfman & Mandviwalle (1994); and Yi & Davis (2003) in this area is cited by Ortiz de Guinea & Webster (2011) to make the point that “the learning of software applications has been emphasized over what actually can be done with the applications for real tasks within job contexts.” Again from the literature, Ortiz de Guinea & Webster (2011) reference Durndell & Haag (2002); Johnson (2005); Wilfong (2006); Potosky (2002); Wang & Newlin (2002); and Beckers & Schmidt (2001), noting that “research on computer and software training shows that CSE beliefs negatively influence computer anxiety and anger while they positively relate to learning performance and computer literacy.” Their point is that the “relation between TSE and CSE might suggest that capability assessments about a computer application are intrinsically associated and generalizable to assessments about the ability to perform a certain task,” which may or may not impact learning, since “individuals' assessments of their skills with a specific computer application might be hindering their judgments about their capabilities to perform a specific task within their work environments, making the results of specific computer application assessments difficult to interpret.” Ortiz de Guinea & Webster (2011) cite the work of Bandura (1986) and Gist & Mitchell (1992), deducing that “individuals need to have a good understanding of a task and its context in order to make realistic self-efficacy estimations.” It follows, then, from a design perspective, that some attention should be directed to the affective domain as performance assessment proceeds to evaluate the psycho-motor skills involved in, say, recognizing the connections between a router and the ports on a switch, which is connected to which, and visualizing the electron flow to the workstations. This follows from Ortiz de Guinea & Webster's (2011) citation of Beas & Salanova (2006), and it brings focus to the controls, or manipulables, available to the designer. Furthermore, the manner in which the instructor delivers course content might take on some criticality, “given that CSE has a direct impact on individuals' psychological well-being”; an exploration of the “relationship between TSE and CSE might provide further clues about how to indirectly increase CSE beliefs by manipulating the task context.” Thus, the provision of supervised lab time and the availability of the lab for practice in setting up networks and configuring the devices supports “computer self-efficacy.” Ortiz de Guinea & Webster (2011) cite McFarland & Hamilton (2006) in this regard: “computer self-efficacy itself is an important construct since it appears to influence cognitive beliefs that are critical for decisions about computer use such as usefulness.” This focuses the designer's attention on the usefulness of CSE judgments; as Ortiz de Guinea & Webster (2011) write, echoing Benbasat & Barki (2007), “Because we also explore the impact that CSE judgments might have on usefulness, we address calls for the examination of the formation blocks of such cognitive beliefs about use.”


The argument above was constructed to advance the notion that the concept map of the practicing CCNA’s skill set resonates with the learning objectives of the assessment items, and can be displayed as shown below:



CCNA: Cisco Certified Network Associate Study Guide, Deluxe Edition (Lammle, 2004)

Exam 640-801 objective                                                     Item number(s)

Planning & Design
    Design a simple LAN using Cisco Technology
    Design an IP addressing scheme to meet design requirements
    Select an appropriate routing protocol based on user requirements
    Design a simple internetwork using Cisco technology
    Develop an access list to meet customer requirements

Implementation & Operation
    Configure routing protocols given user requirements                    7, 8, 9
    Configure IP addresses, subnet masks, and gateway addresses
        on routers and hosts                                               1, 2, 3, 4, 11, 12, 13, 14
    Configure a switch with VLANs and inter-switch communication           7, 10
    Implement a LAN
    Customize a switch configuration to meet specified network
        requirements                                                       5, 10
    Manage system image and device configuration files
    Perform an initial configuration on a router
    Perform an initial configuration on a switch
    Implement access lists
    Implement simple WAN protocols

Troubleshooting
    Utilize the OSI model as a guide for systematic network
        troubleshooting
    Perform LAN and VLAN troubleshooting
    Troubleshoot routing protocols
    Troubleshoot IP addressing and host configuration                      1, 2, 3, 4
    Troubleshoot a device as part of a working network
    Troubleshoot an access list
    Perform simple WAN troubleshooting

Technology
    Describe network communications using layered models
    Describe the Spanning Tree process
    Compare and contrast key characteristics of LAN environments
    Evaluate the characteristics of routing protocols
    Evaluate the TCP/IP communication process and its associated
        protocols                                                          5, 6, 8, 9
    Describe the components of network devices
    Evaluate rules for packet control
    Evaluate key characteristics of WANs



Multiple Choice Questions – These questions are modeled on a set of questions found at:

Rajaraman, V.,  Analysis and Design/Documents On Web Multiple Choice Questions, retrieved 10/24/2011 from http://nptel.iitm.ac.in/courses/Webcourse-contents/IISc-BANG/System%20Analysis%20and%20Design/pdf/Multiple_Choice_Questions/mcq_m11.pdf



1. Each computer connected to the internet must

a. be a Dell Latitude D410

b. have a unique IP address

c. be intranet compatible

d. have a modem connection


2.  An IP address is currently

a. 84 bytes long

b. available in plenty

c. 6 bits long

d. not assigned as it is all used up

3.  IP addresses are converted to

a. an octal string

b. alphanumeric set

c. a hierarchy of domain names

d. a binary string

4.  Internet addresses must always have at least (i) a country name or organization type; (ii) internet service provider’s name; (iii) name of organization ; (iv) name of individual; or, (v) type of organization

a. i, ii, iii

b. ii, iii, iv

c. i, iii

d. ii, iii, iv, v


5.  Internet uses

a. Packet switching

b. locomotive switching

c. Telephone switching

d. Teletype switching

6. Internet data is broken up as

a. fiber optic length packets

b. variable length packets

c. not packetized

d. 64 bytes packets

7. The internet packet data structure consists of: (i) source address; (ii) destination address; (iii) serial number of packets; (iv) message bytes; (v) control bits for error checking; or, (vi) path identification bits

a. i, ii, iii

b. i, ii, iii, iv

c. i, ii, iii, iv, v

d. i, ii, iii, iv, v, vi


8.  The packets of an internet message

a. take a user predetermined path

b. take a path based on packet length

c. go along different paths based on path availability

d. take the shortest path from source to destination

9. The time taken by internet packets

a. can be user determined before transmission

b. may be different for different packets

c. is irrelevant for audio packets

d. depends on the protocol

10. By an intranet we mean

a. a LAN of an organization

b. a Wide Area Network connecting all branches of all destination organizations

c. a computer network used in partnerships only

d. a network connecting all computers of an organization and using the internet protocol
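Items 2 and 3 above turn on the fact that an IPv4 address is a 32-bit (4-byte) quantity whose dotted-decimal form maps to a binary string. A minimal sketch using Python's standard ipaddress module (the address chosen is illustrative, not drawn from the exam):

```python
import ipaddress

# An IPv4 address is 32 bits (4 bytes) written as four dotted-decimal octets.
addr = ipaddress.IPv4Address("192.168.1.10")  # illustrative address

packed = addr.packed               # the 4 raw bytes of the address
bits = format(int(addr), "032b")   # the full 32-bit binary string

print(len(packed))   # 4
print(bits)          # 11000000101010000000000100001010
```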



True or False questions – there are four items listed below; determine whether each is a valid IP address and, if not, state what is wrong with it:


11.T   F



12. T   F



13. T   F


14. T   F
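The validity checks these items call for can be automated. A minimal sketch using Python's ipaddress module, with hypothetical candidate strings, since the quiz's own four items are not reproduced here:

```python
import ipaddress

def check_ipv4(candidate: str) -> str:
    """Return 'valid', or the reason the dotted-quad string is not a valid IPv4 address."""
    try:
        ipaddress.IPv4Address(candidate)
        return "valid"
    except ipaddress.AddressValueError as exc:
        return f"invalid: {exc}"

# Hypothetical candidates illustrating common faults:
for cand in ["192.168.1.10",   # well-formed
             "256.1.1.1",      # octet out of range (maximum is 255)
             "10.0.0",         # too few octets
             "172.16.0.1.5"]:  # too many octets
    print(cand, "->", check_ipv4(cand))
```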


Essay questions:

  1. As a network associate, you must create a NAT configuration using the IP addresses below. Remember you are dealing with a frame relay device and also a switch.
  2. Show the IP addresses of the subnets and hosts.

Configuration Information

Router name – Wonderland

Global Address Range – to

Local inside addresses – to

Number of inside hosts - 14
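Sizing the inside block can be checked arithmetically: 14 usable hosts require a /28 prefix, since 2^(32-28) = 16 addresses, minus the network and broadcast addresses, leaves exactly 14. A minimal sketch using Python's ipaddress module (the prefix shown is illustrative, not the exercise's own range):

```python
import ipaddress

# A /28 yields 16 addresses; removing the network and broadcast
# addresses leaves exactly 14 usable host addresses.
subnet = ipaddress.ip_network("192.168.10.0/28")  # illustrative prefix
usable = list(subnet.hosts())

print(subnet.prefixlen, len(usable))  # 28 14
```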


Figure extracted from, Graphical network simulator, Microcore and Tinycore 3.8.2 Qemu images, retrieved 10/24/2011 from http://www.gns3.net/content/microcore-and-tinycore-382-qemu-images

Write the configuration for both routers in the figure above:

  1. Explain how to move to the privileged EXEC mode and why it is necessary
  2. Enter:
    a. the router name
    b. the enable-secret password
    c. the user password, and
    d. the telnet password
  3. What IPv4 addresses will be used?
  4. Be sure to address each interface and provide for RIP to be the routing protocol
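The steps above can be sketched as a partial Cisco IOS configuration for one router. Every password placeholder and address below is illustrative only (the exercise's own address ranges are specified separately), and this is a sketch, not a complete answer key:

```
! Move from user EXEC to privileged EXEC mode (required before any
! configuration commands can be entered), then enter global config mode
enable
configure terminal
hostname Wonderland
enable secret MySecret             ! encrypted privileged-mode password
line console 0
 password MyConsolePass            ! user (console) password
 login
line vty 0 4
 password MyTelnetPass             ! telnet password
 login
interface FastEthernet0/0
 ip address 192.168.10.1 255.255.255.240   ! illustrative /28 for 14 hosts
 no shutdown
router rip
 version 2                         ! RIP routing for the IPv4 addressing above
 network 192.168.10.0
end
```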

The literature emphasizes the influence that two independent variables, such as instructional decisions and content delivery, can have on a dependent variable like student learning or, better yet, the quality of student learning. The focus should be understood to be the effectiveness of teaching. Gronlund & Waugh (2009, p. 10) support such a contention with the statement that “assessment can aid the teacher in making various instructional decisions having a direct influence on student learning.” They go further to advance the notion that the ways in which assessment can impact student learning in a positive manner include: (1) “providing students with short-term goals,” (2) “clarifying the types of tasks to be learned,” and (3) “providing feedback concerning their learning progress.” Conceptually, then, the instructor would use feedback from assessments to qualitatively alter the content delivery, the medium or manner of presentation, and/or the frequency of assessment. Gronlund & Waugh (2009, p. 12) cited what are termed “curriculum design errors,” delineated to include: a) “striving for learning outcomes that are unattainable by the students,” b) “using inappropriate materials,” and/or c) “using ineffective methods for bringing about the desired changes.” If performed properly, the instructor can anticipate a qualitative impact on the aspects of the instructional process that relate to the “realistic” nature of “instructional objectives,” the appropriateness of the “methods and materials of instruction,” and the “sequencing of learning experiences” (Gronlund & Waugh, 2009, p. 12).

Brookfield and Preskill (2005, pp. 17-18) note that “good evaluation[s] are sometimes the result of teachers’ pandering to students’ prejudices,” and also that instructor popularity can be associated with qualitative-dimension factors like never challenging “students’ automatic ways of thinking and behaving,” or allowing learners “to work only within their preferred learning styles,” referring to such practices as “a form of cognitive imprisonment.” Brookfield and Preskill (2005, p. 92) point to the insight obtained from “getting inside students’ heads,” i.e., “start[ing] to see ourselves through students’ eyes,” where, citing Perry (1988), one encounters “the ‘different worlds’ in the same classroom.” Gross categories such as race, sex, and ethnic group don’t account for the differences to be found as individual learners reveal their perceptions of “the same actions and experience” in “vastly different ways.” The symbolic meaning of instructor “actions” is the key to the “power dynamics of college classrooms,” and the clue to interpreting the many “explanations” that ensue.

Among the tools and/or methods available to perform unit evaluation, UTAS (n.d.) suggests SETL as a “valuable tool in assisting you to evaluate your unit and its teaching,” cautioning that “it is but one means to gather data to inform that evaluation.” It is also steeped in the positivistic tradition and all that that implies for meaningful interpretation. The instrument is to be administered “at or near the end of the teaching semester,” while acknowledging that such measurement can be obtained “throughout the teaching period as well.” The sample of respondents can include not only the learners but also “teaching colleagues, your own self-reflection, assessment records and the like.” Its purpose is to gather “summative information,” not formative information (i.e., “information able to be directly used for course improvement”). In that regard, for example, “items may be chosen to gather diagnostic feedback on a particular course innovation.” The writers from UTAS (n.d.) also argue that, in terms of “‘clues’ for course improvement, you may find the written comments returned to you with the statistical information more informative than the figures.”

Brookfield and Preskill (2005, pp. 29-93) remark, concerning “some kind of standardized evaluation form at the last meeting of a course,” that the frequency interval may be problematic in two ways: 1) “it is summative, after the fact,” leaving “no opportunity” to perform corrective measures, and 2) such forms often resemble “satisfaction indexes-measures of how much people liked us.” The latter “leave us in the dark regarding the dynamics and rhythms of their learning,” registering “little more than whether or not students happened to share the learning style that has shaped our teaching.” It is the contention of this writer that assessment, as a formative component of the learning process, should occur within the process of content delivery, in the sense of Burke (2002), as cited in Sorenson and Reiner (2003, p. 14): “how the changes are planned, launched, or fully implemented, and once into implementation, sustained.” The last point made goes to mediums of content delivery and as such pertains to the concept of teaching presence of Brookfield and Preskill (2005) and/or the community of inquiry as delineated by Garrison & Vaughan (2008). The latter is intended to connote the “interactive and reflective capabilities” of the CoI, where there is “critical and creative thinking at a level well beyond the possibilities of the traditional lecture,” while the former is resonant with the recommended use of the “teacher-designed feedback forms” (Angelo & Cross, 1993, pp. 330-333), the “RSQC2” (Angelo & Cross, 1993, pp. 344-348), or the “exam evaluations” (Angelo & Cross, 1993, pp. 359-361). It was advanced elsewhere that the proposed assessment cycle should occur within the context of a blended classroom environment, where the choice can be made to employ the computer to administer the assessment cycle contextualized by the other components of the learning process.
Such posturing should not be allowed to mask the sometimes “deeper moral and political questions” that accrue to learner populations characterized as “students of different cultures or at different ability levels.” The abusive imposition of criteria emanating from voices in positions of power might foster the proliferation of threatening posturing, bullying, and intimidation tactics that cloud the learning process with fraudulent emotional artifacts. Perhaps this is enabled by the assumptions that drive some instructors, particularly minority-member instructors, when dealing with learners from their same racial/ethnic group or from different racial/ethnic groups; in either case it is a definite display of moral and political assumptions, unethical at best. Or, consider the encounter I had with a female from my own racial/ethnic group, who commented, in an off-hand manner, that “they really are in control” (Brookfield and Preskill, 2005, pp. 215-219). Brookfield and Preskill (2005, pp. 215-219) remarked that “apparently technical glitches often mask deeper moral and political questions,” and that raising them may gain “one a reputation as a troublemaker.” When reviewing the literature of reflective practice, Brookfield and Preskill (2005, pp. 215-219) suggest that the instructor attempt to “discover and research the assumptions by which we work,” a practice often impregnated with “considerations of [how] power permeate[s] educational process,” or with the “presence of hegemonic assumptions…embedded in the way we think about and practice teaching,” along with the vignettes and dramas incorporated in the “stories of how teachers live through the reflective process.” The latter may reveal that “what we thought was our own idiosyncratic difficulty” is “actually an example of a wider structural problem or cultural contradiction” (Brookfield and Preskill, 2005, pp. 215-219). Brookfield and Preskill (2005, p.
219) suggest that “student evaluation forms…be redesigned to take account of the cognitive and emotional complexities involved in teachers becoming critically reflective,” with attempts to “probe the extent to which students felt they had been stretched, challenged, questioned, and introduced to alternative perspectives” (Brookfield and Preskill, 2005, pp. 252-253). There are also critical incidents that are, in fact, “dilemmas and critical moments in…practice” that transfix, traumatize, and otherwise scar the instructor emotionally in ways that are irreparable. They constitute “events that had taken them by surprise or drained a good deal of emotional energy” (Brookfield and Preskill, 2005, pp. 252-253).

Teacher-Designed Feedback Form

Directions: Please respond honestly and constructively to the questions below by circling the responses you most agree with and write brief comments.

  1. On the scale below, please rate the clarity of today’s session.

1          Totally unclear           

2          somewhat unclear                  

3          mostly clear

4          very clear    

5          extremely clear


  2. Was there a point, event, or incident that made you feel disconnected from the substantive content?

1          Totally disconnected

2          somewhat disconnect             

3          mostly connected

4          very connected   

5          extremely connected


  3. If some degree of disconnect was detected, where do its roots lie?

     a. Personal issues you were experiencing
     b. Social class issues
     c. Racial/ethnic group issues
     d. Financial issues
     e. Other

  4. Overall, how interesting did you find today’s session?


1          totally boring

2          mostly boring

3          somewhat boring

4          somewhat interesting

5          extremely interesting

  5. Do the issues of diversity hold any interest for you personally?

1          totally boring

2          mostly boring

3          somewhat boring

4          somewhat interesting

5          extremely interesting

  6. Would having a disabled learner in the class impair your learning?


1          totally distracting

2          mostly distracting

3          somewhat distracting

4          very distracting

5          extremely distracting

  7. How useful was today’s session in helping you learn the material?


1          useless

2          Not very useful

3          somewhat useful

4          very useful

5          extremely useful

8. At any point were you feeling threatened, bullied, or intimidated by something in or outside of the classroom?

                        1          yes, very threatened from inside factor

                        2          yes, very threatened from outside factor

                        3          no, I was comfortable

                        4          somewhat comfortable

                        5          totally comfortable

9. Are you employed?

            1.  yes

            2. no

10. If no was your response to question 9, why do you think you are unemployed?

            1.  state of the economy

            2. race

            3. family of orientation

                        4. class or political orientation

            5. personality

11. Are you taking this class in an online venue or a face-to-face format?

            1. online venue

            2. face-to-face format

12. Which do you prefer?

            1. online

            2. face-to-face

13. Write a short statement to support your choice of venue





14. What is the best way to improve the economy?

            1. let the politicians decide

            2. focus on public works

            3. manipulation of tax structure

            4. manipulation of export/import policy

            5. labor intensive hiring

15. How can this course be improved?



16. What did you find most helpful about today’s session? (please provide one or two specific examples)



17. How could this specific class have been improved? (respond freely providing one or two specific examples)
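Once the forms are collected, the circled ratings on the closed-ended items above can be summarized numerically before reading the open comments. A minimal sketch with hypothetical response data (the item labels and scores below are illustrative only):

```python
from statistics import mean

# Hypothetical circled ratings (1-5) from five respondents for three items.
responses = {
    "Q1 clarity":    [4, 5, 3, 4, 4],
    "Q4 interest":   [3, 4, 4, 5, 2],
    "Q7 usefulness": [5, 4, 4, 3, 5],
}

# Report the mean rating and response count per item.
for item, scores in responses.items():
    print(f"{item}: mean={mean(scores):.1f} (n={len(scores)})")
```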




















Accommodating Students With Special Needs


The most significant point to be made about how education takes place is that communication is essential. The quality of the education received appears to be related to the quality of the communications in the learning environment. This is no epiphany. The expectations held by the learner revolve around their perceptions of others and their class membership, rather than any racial/ethnic or gender group membership used to partially define them. This is corroborated by the remarks of Brookfield and Preskill (2005, p. 141) about the behavior of the working-class learner at the “conjunction of race and class,” while caught in the throes of the many dimensions of interpersonal communication and the issues of physical and psychological proximity, where who speaks to whom, and what is deemed acceptable commentary, are potentially pregnant “crises” of the moment. Schwalm (1999) raised the question of whether there are “any statistically significant differences in age, ethnicity, and gender that appear in conjunction with access to and general use of our computing facilities,” and also noted that, if we are to recognize the differences in learning styles of individual students, then we have to acknowledge that “people use technology in different ways, for different purposes, and at different frequencies.” Regarding policy implementation, Schwalm (1999) remarked that “colleges and departments certainly vary in their commitment to and emphasis upon the use of computing.” Given such a matrix describing participation, Schwalm (1999) reasoned that “we might expect to see differences in technology use according to academic discipline, or between students in transfer and occupational programs, or according to learning style or number of hours enrolled.” Additionally, Schwalm (1999) projected that “we could expect these differences to spread themselves equally across our student population, especially at colleges with large enrollments, significant technology installations, varied programs, and diverse populations.” If access is then accepted as the word to describe the current state of things, one would ask questions about the ratio of students to computers; the frequency of use by gender, race/ethnic group affiliation, and socio-economic status; individual student aggression; and user sophistication.


In the case of the hybrid course design, the commentary of Swail (2002) might be instructive. The argument is that the mesh between factors of this nature “has been a qualified success: more students from all backgrounds are attending college than ever before, but large gaps still exist in who goes where and who completes degree programs.” Empirical evidence supports the contention that “low-income and first-generation students, as well as students of color, are less likely to attend four-year institutions and to persist through degree completion than are more advantaged students.” Additionally, “demand for postsecondary study is at an all-time high for both students of traditional and nontraditional ages; for-profit and certificate-based providers are becoming more the norm than outliers.” From a policy perspective regarding demand, it should be noted that, as a trend, the use of distance education is “proliferating at all types of institutions; and higher education is becoming a global commodity traded across political and geographic boundaries.” The real situation finds that “colleges and universities are being pressed to serve a student body that is vastly different from only a few decades ago.” While this remark should be construed to include the adult learner, it is also meant to recognize what is occurring among “a dramatically different cohort of high school students” that is “preparing for postsecondary study.” Swail (2002) also noted that traditional “minority” groups now constitute the majority in that cohort, and the fact that “this group of prospective students will be much less prepared for college than the current entering cohort.” Swail (2002) interpreted Kipp (1998) to underscore the emerging trend that the “most rapid growth in the population will be among groups” that are: 1) “traditionally more likely to drop out of school,” 2) “less likely to enroll in college-preparatory course work,” 3) “less likely to graduate from high school,” 4) “less likely to enroll in college,” and 5) “least likely to persist to earn a baccalaureate degree.”


It can be argued that learners who exhibit age, social class, and gender differences present a challenge to the course designer, as does the range of difference found among the physically, mentally, and linguistically challenged, which constitutes a set of variables that must be addressed in this age of accommodation and inclusion. Dalton (n.d.) cites the range of disabilities amenable to the use of assistive technologies in a classroom or at home as “orthopedic impairments,” “visual impairments,” “augmentative communications,” “hearing impairments,” “learning disabilities,” “cognitive impairments,” “developmental disabilities,” “traumatic brain injury,” and “mild mental retardation.” The various scenarios encompass the following issue sets: access, control, independence, degrees of independence, visual impairment, augmented communication, hearing impairment, learning disabilities, and color discrimination. In a word, the instructor can attempt to level the playing field, because the pressure is to prove that it is possible to “assist the disabled to perform work on an equal basis with their non-disabled peers.” As a guide to making the choices for use in teaching and assessment, Dalton (n.d.) cautions that attention should be placed on the factors of “popularity,” “efficient use,” and productivity. Kanaitsa (2010) wrote that “to teach students with disabilities, teachers must attend to the learning environment, as well as the beliefs and characteristics of the students,” and advocated that “teachers should strive to effectively plan, deliver and evaluate instruction.” The article focuses on the strategies teachers should use. Teaching strategies for students with disabilities need to be carefully orchestrated to take into account the interactive nature of the teaching and learning process. Planning incorporates attention to the needs exhibited by the individual learner and to the instructional cycle.
The latter refers to the time and effort a teacher uses “to determine the goals of instruction and learning, plan and deliver instruction and evaluate and modify instruction.” The instructional cycle should also address the aspect of evaluation, which “means continually examining data from both formal and informal assessments to determine student's knowledge” (Kanaitsa, 2010). Kanaitsa (2010) suggested some of the ways to examine such data, such as “by reading inventories, looking at the standardized tests, work samples and observations,” and noted that the “types of evaluation measures a teacher can use are performance records, charts, progress graphs, portfolios, learning logs and journals.”


Kanaitsa (2010). Strategies for teaching students with disabilities. Retrieved 11/01/2011 from http://www.brighthub.com/education/special/articles/70535.aspx


The table below is extracted from Dalton’s article, concerning the range of options available to assist the hybrid classroom instructor during teaching and assessment activities.


Typical Assistive Technology Systems and Solutions Available


The largest hurdle for the physically impaired is the use of a computer. Using the device for educational purposes confronts what Dalton (n.d.) calls the “largest orthopedic related problem in accessing the computer,” the “inability to use fingers, hands or arms.”

Voice recognition, a software package that, combined with a microphone, allows voice command, control, and text entry on a typical computer, is currently the most popular solution. Dalton (n.d.) stated that “an appropriate voice recognition system should allow the user to operate a computer without ever touching the keyboard,” allowing “text entry at a rate of 80 to 100 words per minute.”



The issues here have to do with guiding the fingers and selecting the appropriate key.


The solution set includes alternatives that allow the user to adjust the input device to different angles.




This issue set goes to the effort and functional cost, from an operational point of view, and the effectiveness and efficiency involved in use of the computer.

Specifically, this means the use of input devices, where the solution set includes head-mounted mouse controllers, which can be combined with “devices with on-screen keyboards [that] allow the user to control the computer with head movement.” Essentially this means “a person could control a computer with nothing more than a single switch.” While “slow,” it does, in fact, “provide independence to the user” (Dalton, n.d.).

degrees of independence

The issue is environmental control. The degree of independence afforded the learner is a credible consideration for the designer, which also implies a degree of control over the environment.


“Environmental Control Units (ECU)” are units designed to give the person control of, and involvement in, his/her physical environment, in spite of orthopedic impairments. Thus the learner with a “severe disability can control his/her environment.” Their use enables access to and control of “appliances, lights, drapes, television, stereo, VCR, door locks, air conditioning, etc.,” and also “any infrared device” (Dalton, n.d.). The last options imply the use of wireless devices.


visual impairment

While voice recognition and environmental control address input with the voice, visual impairment can also be accommodated in ways that make a learner so challenged competitive in a blended and/or online learning environment.

There are two categories to be addressed: a person with either low, or no, vision.


Dalton’s (n. d.) argument is that “visual impairments are divided into these two, very distinct areas:”

  a. Low vision users, where “screen enlargement software packages” can be used to “enlarge the screen up to more than sixteen times,” or it is possible to use “large print labels for the keyboard” or “large monitors” to “provide larger print to the low vision user.”
  b. Blind users, who may “need a screen review package and a speech synthesizer.” The former, the “screen review package,” is “a memory resident program that speaks the contents of any software program on the screen through a speech synthesizer.” The “user can then hear what's on the screen spoken to him/her.” Another option is the “document reader,” which “consists of a flatbed scanner, software, a speech synthesizer, and a computer.” Dalton (n.d.) elucidated: “when a document is placed on the scanner bed, the computer and software read the document and, through the use of voice synthesis, ‘speak’ it to the end user.” The last two alternatives are “Braille readers,” which “can read the screen by using a refreshable Braille output device which features ever-changing Braille cells,” and “pocket computers or organizers,” which “are available with voice output instead of the traditional monitor.”


augmented communication

The issues here include access to computer technology and communication with others. There exists a “software package that causes the computer monitor to flash when the computer beeps which is extremely helpful to a hearing-impaired user,” according to Dalton (n.d.). This, or “TDD software,” is available for use on computers. The latter is “a device which allows the hearing-impaired to communicate via the telephone by allowing them to type their messages back and forth.” When the telephone is equipped with “built-in amplification to make the handset louder,” effective communication is possible. There are also “personal amplifiers that fit in your shirt pocket with very directional microphones.”


learning disabilities

Aside from the physical disabilities, learners from the target population might exhibit learning disabilities such as “dyslexia,” which might be effectively mitigated through the use of “audio feedback systems.”


Dalton (n.d.) argued also that “the use of screen review software and a voice synthesizer allows the dyslexic typist to hear what he/she is typing.”


color discrimination

The issue set here includes problems of color discrimination.

Dalton (n.d.) wrote that there “is a software package that automatically pronounces, spells, and defines each word entered from the computer keyboard,” and also that color-discrimination issues “can be overcome by simply changing the color combination on the computer monitor.”









Addressing the Diversity Issues

Diversity is the subject of much rhetoric centered on race/ethnicity, gender, culture, lifestyle, and geography; the needs of the disabled are also included. These are regarded as issue sets specific to the educational policy arena. Reflecting on the online environment, Pallof and Pratt (2003, p. 39) commented that its use “does not meld all students into one type-…all virtual students are not alike.” It is worthwhile to keep in mind the “unique needs” that are “created by culture, gender, life span, lifestyle, and geography.” Internet use has increased the “array of educational practices available to instructors.” Joo (1999), remarking on quality instruction, states that when offered it must contend with a factor set that includes “remote students, reach underserved populations, respond to the diverse learning styles of and paces at which students learn, break down barriers of time and space, and give access to students of different languages and cultures.” The non-neutrality of technology is mentioned by Pallof and Pratt (2003, p. 39), who cited McLoughlin (1999) in stating that “technology is not neutral and that when culture and technology interact, either harmony or tension can be the result.” From the perspective of curriculum design, the points of contention, according to Joo (1999), are the areas where cultural issues may come into play: content, multimedia, writing styles, writing structures, web design, and the roles of the student and instructor. As such, they constitute markers that provide moments to reflect on how to build quality into the course to be constructed. Shoebottom (2005), speaking to the difficulties encountered by the learner for whom English is a second language, provides a useful explication of the differences between English and other major languages used around the world.
His overview addresses the “nature of the English language.” From the point of view of assessment, the instructor must be aware that misspelling and incorrect grammar might be more prevalent for some learners because “there are several English dialects or varieties” that differ in grammar, vocabulary, and pronunciation. He adds that what we consider to be standard English is really that “used by educated, middle-class people from the south of England.”

Shoebottom (2005) wrote that the alphabet used contains no “diacritics such as the umlaut in German or the circumflex in French.” The exception is “words imported from other languages,” which are “increasingly written without the diacritic, even in formal English.” Shoebottom (2005) continued, “although the varieties of spoken English sound very different, all native-speakers use the same writing conventions.” Regarding phonology, Shoebottom (2005) points out that “Standard English has about 20 vowel sounds (12 pure vowels / 8 diphthongs) and about 24 consonant sounds.” It seems that “speakers of languages which have fewer vowel sounds often have difficulty making a distinction between words like sit / seat; pull / pool; food / foot,” as do the “consonant clusters in many English words: strength; splash, chronicle. Non-native speakers may say such words with an extra vowel sound or leave out the syllable altogether.” Also, the pronunciation of English words such as “this, thin, clothes, thirteenth, months” causes “problems for learners who do not need to use the tip of the tongue to produce words in their own language.” Other difficulties Shoebottom (2005) included were “attempting to produce spoken English that sounds natural,” the “unpredictability of English word stress,” and the “elision of weak syllables and the insertion of consonants (liaison).” Shoebottom (2005) goes on to cite the problems that arise from the grammatical structure of English, where the rules regarding verbs and their tenses constitute the most significant problem for learners: the decision about “which tense (verb form) is required in English to correctly express the meaning that they wish to convey.” Related to tense is the difficulty surrounding “the correct choice of modal.” This is because “modal verbs are heavily used in English to convey shades of meaning in the areas of compulsion, ability, permission, possibility, hypothesis, etc.,” so
“learners have problems understanding and conveying difference.” Additionally, “not only are verbs largely uninflected in English, but also nouns, pronouns and adjectives,” so “articles and other determiners never change their form.” Shoebottom (2005) also notes that “meaning in English is conveyed largely by word order,” yet ferreting out the meaning of sentences becomes significantly more difficult when indirect objects or adverbials are added to the standard Subject-Verb-Object syntax. The observation that “most learners of English have problems ordering words correctly in longer, more complex clauses” is also a grammatical issue. These learners, and those whose native language does not use articles, encounter another difficult dimension of the English language. Add to the above problems the fact that English has the “largest vocabulary of any language,” about one million words. The use of English cognates and phrasal verbs are further areas where the non-native speaker of English might encounter difficulty, according to Shoebottom (2005).




  • Interpretation of Results section: This section includes
  • the scoring rubric to be used and

Performance Assessment

The use of instruments like discussion forums, journals, and other artifact production is intended to tie the course’s learning outcomes to the production of these assessment vehicles as evidence of understanding. The proposed scoring rubric is designed to implement those ends. Shown below is a set of instruments that aligns with the objectives in a generic sense and can be used to evaluate the learner’s performance on a lab-by-lab basis and also over the duration of the course:

Lab Journals – compiled as a set of print screens of the result of each command, in the proper sequence, for each lab assigned. A copy of the IP routing configuration tasks should also be included:

  1. Global configuration, which involves
    1. selection of routing protocols, and
    2. specification of networks.
  2. Configuration of each router interface by specification of the IP address/subnet mask.
  3. Selection of an IP routing protocol, which involves setting both global and interface parameters. The routing protocol, a global task, is either
    1. RIP (Routing Information Protocol), a distance-vector routing protocol, or
    2. IGRP (Interior Gateway Routing Protocol), also a distance-vector routing protocol.
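The global and interface configuration tasks above culminate in telling the routing protocol which networks to advertise. Under RIP version 1, Cisco’s `network` statement is classful, so a learner can pre-compute what a given interface address will cause the router to advertise. The following is a minimal sketch in Python, not part of the lab itself; the interface addresses are hypothetical examples, and class D/E addresses are ignored:

```python
import ipaddress

def classful_network(address: str) -> str:
    """Return the classful network that a RIP v1 'network' statement
    would advertise for an interface address (RIP v1 ignores the mask)."""
    first_octet = int(address.split(".")[0])
    if first_octet < 128:          # Class A: /8
        prefix = 8
    elif first_octet < 192:        # Class B: /16
        prefix = 16
    else:                          # Class C: /24 (D/E ignored in this sketch)
        prefix = 24
    network = ipaddress.ip_interface(f"{address}/{prefix}").network
    return str(network.network_address)

# Hypothetical interface addresses a lab topology might use:
print(classful_network("10.1.1.1"))     # 10.0.0.0
print(classful_network("172.16.4.2"))   # 172.16.0.0
print(classful_network("192.168.1.5"))  # 192.168.1.0
```

A journal entry that records both the entered command and the expected classful result gives the grader a quick correctness check.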





the Lab Journal scoring rubric



Correctness

Exceptional (5 points): The command line sequence is error-free and meets all of the specifications.

Acceptable (4 points): The command line sequence is error-free, produces correct results, and displays them correctly. It also meets most of the other specifications.

Amateur (3 points): The command line sequence is error-free and produces correct results but does not display them correctly.

Unsatisfactory (2 or fewer points): The command line sequence is not error-free and produces incorrect results.


Organization

Exceptional (5 points): The command line sequence is exceptional, in that it is well organized and very easy to follow.

Acceptable (4 points): The command line sequence is coherent and easy to read.

Amateur (3 points): The command line sequence is neither cogent nor readable, except by someone who knows what it is supposed to be doing.

Unsatisfactory (2 or fewer points): The command line sequence is unorganized and difficult to decipher.


Reusability

Exceptional (5 points): The command line sequence is reusable as a whole, as is each routine.

Acceptable (4 points): Most of the command line sequence is reusable in similar applications.

Amateur (3 points): Some parts of the command line sequence are reusable in other applications.

Unsatisfactory (2 or fewer points): The command line sequence is unorganized and therefore not reusable.


Documentation

Exceptional (5 points): The documentation is well written. It explains clearly what the command line sequence is accomplishing and how.

Acceptable (4 points): The documentation is composed of embedded comments and some simple header documentation that is somewhat useful in understanding the command line sequence.

Amateur (3 points): The documentation is composed of comments embedded in the command line sequence, with some simple header comments separating routines.

Unsatisfactory (2 or fewer points): The documentation is composed of comments embedded in the command line sequence and does not help the reader understand the code.


Timeliness

Exceptional (5 points): The journal was delivered on time.

Acceptable (4 points): The journal was delivered within a week of the due date.

Amateur (3 points): The journal was delivered within 2 weeks of the due date.

Unsatisfactory (2 or fewer points): The journal was more than 2 weeks overdue.


Efficiency

Exceptional (5 points): The command line sequence is extremely efficient without sacrificing readability and understanding.

Acceptable (4 points): The command line sequence is fairly efficient without sacrificing readability and understanding.

Amateur (3 points): The command line sequence is brute force and/or unnecessarily long.

Unsatisfactory (2 or fewer points): The command line sequence is unnecessarily long and appears to be patched together, or is altogether inaccurate.
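A rubric of this shape lends itself to a simple tally across criteria. The sketch below is illustrative only: the six criterion labels and the sample ratings are assumptions made for the example, not part of the rubric itself.

```python
# Illustrative tally for a six-criterion lab-journal rubric scored on the
# 5/4/3/2-or-fewer scale. Criterion labels and sample ratings are hypothetical.
CRITERIA = [
    "correctness", "organization", "reusability",
    "documentation", "timeliness", "efficiency",
]

def total_score(ratings: dict) -> int:
    """Sum one 0-5 point rating per criterion; reject anything else."""
    for name in CRITERIA:
        if not 0 <= ratings[name] <= 5:
            raise ValueError(f"{name}: rating must be between 0 and 5")
    return sum(ratings[name] for name in CRITERIA)

sample = {"correctness": 5, "organization": 4, "reusability": 4,
          "documentation": 3, "timeliness": 5, "efficiency": 4}
print(total_score(sample))  # 25, out of a possible 30
```

Keeping the tally mechanical leaves the grader’s judgment where it belongs: in choosing the level descriptor for each criterion.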


In addition to the assessment of labs, search, research, and discovery prowess can also be assessed. This is what the Asynchronous discussion or forum participation rubric is designed to do: it can be used to evaluate the submitted written documents, while the Lab Journal rubric (shown above) will be used for periodic and final artifacts. The comparison of the instruments and the learning objectives demonstrates the necessity of using both assessment approaches in the blended environment to construct learning experiences that resonate with industrial and educational standards.


The challenge to the learner is to develop a strategy to assess not only secondary sources, such as topic-specific articles, but also wikis, blogs, and vendor websites dedicated to the issue at hand. This statement is intended to justify the suggested sources and/or websites listed for the student to use as background material for each topical area. The student is expected to develop a perspective and articulate it in discussions that are exchanged and critiqued by peers. It is also expected that the student will learn to solve some of the problems of IP addressing and equipment configuration, and will gain an appreciation for the reports of experiences found in the reference articles. The use of the labs will provide an additional experiential dimension, affording the learner even more perspective from which to write about the similarities and differences between the protocols and levels of networks used in the industrial sector. The rubric to be used is shown below:
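As one concrete instance of the IP-addressing problems mentioned, a learner might verify whether two workstations can reach each other without crossing a router. A small sketch using Python’s standard `ipaddress` module (all addresses are hypothetical lab examples):

```python
import ipaddress

def same_subnet(host_a: str, host_b: str, prefix: int) -> bool:
    """True when both hosts fall inside the same subnet for the given prefix."""
    net_a = ipaddress.ip_interface(f"{host_a}/{prefix}").network
    net_b = ipaddress.ip_interface(f"{host_b}/{prefix}").network
    return net_a == net_b

# Hypothetical workstation addresses from a lab topology:
print(same_subnet("192.168.1.10", "192.168.1.200", 24))  # True
print(same_subnet("192.168.1.10", "192.168.2.10", 24))   # False
```

Reasoning of this kind is exactly what the discussion posts ask the learner to articulate in prose.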








Comparison Criteria



  1. Varieties of protocols at the WAN and LAN levels of networking
  2. Differences-
    1. Configuration of the router versus configuration of the switch and workstations.
    2. GUIs available to aid the configuration tasks
    3. Use of the command line (use of the commands as procedural knowledge)


Name: _____________________                    Instructor: ________________________


Date Submitted: _______________                Course Title: ______________________




Assignment completion







Lab Completion












Learner Product Production-Scenario Design








Asynchronous discussion or forum participation







Discussion Participation






Written response includes command(s), print screen, commentary relative to Windows






Articles assessment reported according to APA, relative to Windows






Cogency and coherence of written arguments







  • information on how you will protect the validity of the assessment. It also
  • explains why you have chosen either criterion- or norm-referencing for this assessment.

The context for this proposed criterion-referenced assessment is the blended classroom, where the learner can use the computer to access and acquire information on an “as needed” basis and examine the equipment rack to distinguish between the routers, switches, bridges, and other equipment used to connect the workstations in the lab section of the classroom. The learner is also expected to have performed the assigned labs and to have gained a working familiarity with the software, including: the IOS hierarchy (global configuration mode, interface configuration mode, router configuration mode, and line configuration mode); access via the console, Aux port, or HyperTerminal using the CLI; interfaces and ports (serial, Ethernet, token ring, asynchronous, and FDDI); the memory types (RAM, ROM, Flash, and NVRAM); and how to use help. The other elements of the lab environment are the physical topology and the lab objectives. Bond (1996), when discussing both criterion- and norm-referenced tests, first recognized these as “two major groups.” Bond (1996) further elucidated by stating that “the major reason for using a norm-referenced tests (NRT) is to classify students.” She also asserts, citing Anastasi (1988, p. 102), that criterion-referenced tests (CRTs) determine “...what test takers can do and what they know, not how they compare to others.” Which is to say, “they report how well students are doing relative to a pre-determined performance level on a specified set of educational goals or outcomes included in the school, district, or state curriculum.” This is analogous to the weekly or periodic exams given during the term. They are normally based upon a set of learning outcomes delineated in a course curriculum and used as objectives in a lesson plan.
Bond (1996) related that “educators or policy makers may choose to use a CRT when they wish to see how well students have learned the knowledge and skills which they are expected to have mastered.” In that sense, a CRT “may be used as one piece of information to determine how well the student is learning the desired curriculum and how well the school is teaching that curriculum.” The norm-referenced test, according to Bond (1996), citing Stiggins (1994), is “designed to highlight achievement differences between and among students to produce a dependable rank order of students across a continuum of achievement from high achievers to low achievers.” Schools often use them to rank learners as “remedial” or “gifted.”


Bond (1996) points out that the U.S. Congress, Office of Technology Assessment (1992) “defines a standardized test as one that uses uniform procedures for administration and scoring in order to assure that the results from different people are comparable,” and also that “any kind of test--from multiple choice to essays to oral examinations--can be standardized if uniform scoring and administration are used” (p. 165). She clarifies by stating that “the comparison of student scores is possible.” This is the intent of “most national, state and district tests”: to be able to assert that “every score can be interpreted in a uniform manner for all students and schools.” At the collegiate level, the designer can collect assessment items that correlate with indicators of reliability. This points in the direction of test content, which can be used to choose between the two types, since the “content of an NRT test” is “selected according to how well it ranks students from high achievers to low,” while the “content of a CRT test” is “determined by how well it matches the learning outcomes deemed most important.” Of course, one might analyze test content and ask how thoroughly it corresponds to the overall content of the curriculum. That is the point of the collection of assessment items included here: they match, in content, type, and form, the genre of items used in the actual CCNA certification examination.


The design principle aspired to here is a correspondence between “the content of the test” and “the content that is considered important to learn,” because “the CRT gives the student, the teacher, and other stakeholders more information about how much of the valued content has been learned than an NRT,” a worthwhile point to consider, according to Bond (1996).


The following is adapted from: Popham, J. W. (1975). Educational evaluation. Englewood Cliffs, New Jersey: Prentice-Hall, Inc.





Purpose

Criterion-referenced tests: To determine whether each student has achieved specific skills or concepts; to find out how much students know before instruction begins and after it has finished.

Norm-referenced tests: To rank each student with respect to the achievement of others in broad areas of knowledge; to discriminate between high and low achievers.


Content

Criterion-referenced tests: Measure specific skills which make up a designated curriculum. These skills are identified by teachers and curriculum experts. Each skill is expressed as an instructional objective.

Norm-referenced tests: Measure broad skill areas sampled from a variety of textbooks, syllabi, and the judgments of curriculum experts.


Item characteristics

Criterion-referenced tests: Each skill is tested by at least four items in order to obtain an adequate sample of student performance and to minimize the effect of guessing. The items which test any given skill are parallel in difficulty.

Norm-referenced tests: Each skill is usually tested by fewer than four items. Items vary in difficulty. Items are selected that discriminate between high and low achievers.


Score interpretation

Criterion-referenced tests: Each individual is compared with a preset standard for acceptable achievement; the performance of other examinees is irrelevant. A student's score is usually expressed as a percentage. Student achievement is reported for individual skills.

Norm-referenced tests: Each individual is compared with other examinees and assigned a score, usually expressed as a percentile, a grade equivalent score, or a stanine. Student achievement is reported for broad skill areas, although some norm-referenced tests do report student achievement for individual skills.
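The contrast in score interpretation can be put in a few lines of code: a criterion-referenced result compares the learner against a preset standard, while a norm-referenced result compares the learner against a cohort. The 80% cut-score and the cohort of raw scores below are hypothetical illustrations, not values from the text.

```python
def criterion_referenced(raw: int, total: int, cut: float = 0.8):
    """Percentage correct, and whether a preset standard (cut) is met."""
    pct = 100.0 * raw / total
    return pct, pct >= 100.0 * cut

def percentile_rank(raw: int, cohort: list) -> float:
    """Percent of the cohort scoring strictly below this raw score."""
    below = sum(1 for s in cohort if s < raw)
    return 100.0 * below / len(cohort)

cohort = [55, 60, 70, 75, 80, 85, 90, 95]    # hypothetical class scores
pct, met = criterion_referenced(80, 100)
print(pct, met)                    # 80.0 True -- meets the 80% standard
print(percentile_rank(80, cohort))  # 50.0 -- half the cohort scored lower
```

Note that the criterion-referenced verdict would be identical no matter how the rest of the cohort performed, which is exactly Anastasi’s point as cited above.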


The following is adapted from: Popham, J. W. (1975). Educational evaluation. Englewood Cliffs, New Jersey: Prentice-Hall, Inc., found in  Huitt, W. (1996), http://www.edpsycinteractive.org/topics/measeval/crnmref.html




Schaeffer (2007) wrote that “a recent variation of criterion-referenced testing is ‘standards-referenced testing’ or ‘standards based assessment.’” He also points out that many states and districts have adopted content standards, which are “curriculum frameworks” that “describe what students should know and be able to do in different subjects at various grade levels.” Again, these serve as guides to the instructor when constructing periodic exams, along with “performance standards” that “define how much of the content standards students should know to reach the ‘basic’ or ‘proficient’ or ‘advanced’ level in the subject area.” A large problem is that tests are then “based on the standards and the results are reported in terms of these ‘levels,’” which, of course, represent human judgment. So when a state increases or changes performance standards, students either “continually have to know more to meet the same level” or are victims of the direction of the change relative to the generation of learners that their siblings or parents represent. The issue here is quality, in that, as Schaeffer (2007) states, “standards” are “supposed to cover the important knowledge and skills students should learn -- they define the ‘big picture.’” It is no secret that the components of the “big picture” have changed enormously since the introduction of the computer, which has highlighted significant differences among learners from different socio-economic classes.
Schaeffer (2007) asserts that “state standards should be well-written and reasonable,” but points to the fact that “some state standards have been criticized for including too much, for being too vague, for being ridiculously difficult, for undermining higher quality local curriculum and instruction, and for taking sides in educational and political controversies.” It follows that “flawed” or “limited” standards cripple the testing process and hamper “fair” assessment plans, yet they do impact “local curriculum and instruction.” That having been said, it also follows that, quality aside, the match between the proposed assessment tool and the relevant standard is critical, from this writer’s point of view. When defending the contemporary approach to testing in the field of IT, one can see a decided preference for the use of multiple-choice questions to evaluate mastery of conceptual and factual material, and also the use of extended performance assessment to determine proficiency at the HOTS level, i.e., higher order thinking skills. Schaeffer (2007) asserts that the relevant question is “are all the important parts of the standards measured by the test?” because “often, many important topics or skills are not assessed.” He gives as the reason that most “state exams still rely almost entirely on multiple-choice and short-answer questions.” Yet he offers the observation that “such tests cannot measure many important kinds of learning, such as the ability to conduct and report” research, to analyze and interpret information to present a reasonable explanation of the caus[ality], or “to engage in serious discussion or make a public presentation” (see fact sheet on multiple-choice tests), all of which may or may not be relevant to a real skills assessment.


Gronlund & Waugh (2009, p. 17) accent the “realism of assessment tasks,” i.e., “the extent to which they simulate performance in the real world.” To that end, they also stress that “extended performance assessment,” as an alternative to selection among choices, is “high in realism because it attempts to simulate performance in the real world.” This resonates well with the attempt to simulate activity in the higher order thinking range, i.e., “analysis, synthesis and evaluation.” This, they argue, is a close approximation of the real world and the complexity of the problems there that must be addressed (Svinicki & McKeachie, 2007, p. 17). As an alternative choice, such tasks “typically involve multiple learning outcomes, the integration of ideas and skills from a variety of sources, the availability of various possible solutions, and the need for multiple criteria for evaluating the results,” as well as “complex movement patterns that are guided by the integration of information and specific skills from various learning experiences…e.g.,…repairing electronic equipment.”

Validity as an Issue

The criticality of this issue set goes to the question of the “information value of [assigned] grades” as they relate to “the methods used to evaluate learning,” according to Svinicki & McKeachie (2007, p. 128). Regarding the issue of utility, they assert that “for grades to be truly useful, they need to be based on what the measurement field refers to as valid and reliable methods.” They define a “valid measure” as an assessment that “measure[s] what they say they measure,” providing the criterion of “what went into the grade calculation.” Although behavioral observations apply to the issue of “diligence” (Svinicki & McKeachie, 2007, p. 129), and are possible “surrogates for… personal responsibility, or professional behavior,” they are “not valid measures of what a student has learned.” The authors argued for the exclusion of such practice. Regarding “reliability,” it was asserted that it goes to whether or not the assessment “produces fairly consistent results either across time or across multiple graders.” As such, they define a “reliable measure” as one about which it can be said that “everyone’s grade indicates a very specific performance, and all individuals whose performance is the same get the same grade.” In IT it can successfully be argued that assessments are conducted against “an absolute standard,” which resonates with Svinicki & McKeachie’s (2007, p. 130) citation of Travers (1950) as a proposed rubric:

  1. All major and minor goals are achieved
  2. All major goals achieved; some minor ones not
  3. All major goals achieved; many minor ones not
  4. A few major goals achieved, but student is not prepared for advanced work
  5. None of the major goals achieved

Such a position advocates the use of a criterion-based system and, as Svinicki & McKeachie (2007, p. 131) state, “avoids the detrimental effects of grading student performance relative to one another.” The philosophical issue here, according to Svinicki & McKeachie (2007, p. 131), is “what do grades mean?” The purpose can be construed to be either to identify the “best” student or “to indicate what each student has achieved.” They also mention situation-specific issues, as when “intra-group comparison” is needed, i.e., allocation of “limited resources or awards to only the best of a group.” They also posed the critical question, “What if the skills needed for the next class or on the job are so critical that failure to achieve an absolute level of competence could have dire consequence?”
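Read as a criterion-referenced scale, the Travers-style rubric cited above maps goal attainment directly to grades. The sketch below is an illustration only: the letter-grade mapping and the half-of-minors threshold are assumptions made for the example, not part of Travers (1950).

```python
def travers_grade(majors_met: int, majors_total: int,
                  minors_met: int, minors_total: int) -> str:
    """Assign a letter grade from goal attainment, loosely following the
    Travers (1950) rubric; the A-F mapping here is an assumption."""
    if majors_met == majors_total and minors_met == minors_total:
        return "A"          # all major and minor goals achieved
    if majors_met == majors_total:
        # all major goals met; grade turns on how many minor goals were missed
        return "B" if minors_met >= minors_total / 2 else "C"
    if majors_met > 0:
        return "D"          # a few major goals; not prepared for advanced work
    return "F"              # none of the major goals achieved

print(travers_grade(4, 4, 6, 6))  # A
print(travers_grade(4, 4, 2, 6))  # C
print(travers_grade(0, 4, 3, 6))  # F
```

Because the grade depends only on the individual’s goal attainment, not on any cohort, the scheme is criterion-based in exactly the sense argued for above.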





















Conclusion

Properly considered, this paper represents a compilation of some of the more significant literature on the topic of assessment in the arena of adult education. It spans the range to include summative, formative, performance, classroom, and affective assessment. The focus has been on improvement of mastery of the learning objectives as they relate to the practice of computer networking and attaining the CCNA certification as a knowledgeable, professional technician. The effort has been extended to making assessment of this knowledge attainable to all learners, with respect to the issues of diversity, those challenged by the use of the English language, or those with significant hurdles in the form of disabilities, racial/ethnic group membership, class, customs, conventions, and cultural constraints, with an eye on how to evaluate the current state of things regarding the learner, the course content, and the delivery mechanisms in use.

This document and those that precede it were produced for the mind and the ear of the most discerning and scrupulous of critics. Its intent is to provide fodder to sustain the argument of mastery of the conceptual base of teaching and assessment at the collegiate level. This is not the work of one given to trivial and trifling pursuits and misused pleasures, but rather a reflection of the set of values that cherish the devotion and attention provided over the years by both daughters, mate, and friends, with the most sincere intention of securing employment where it was once denied. Regarding best practices, the American Association for Higher Education states that “Its effective practice, then, begins with and enacts a vision of the kinds of learning we most value for students and strive to help them achieve.” This attention to the affective domain of the IT profession was addressed at length. It should be construed to mean that “educational values should drive not only what we choose to assess but also how we do so.” This is another way of saying that the designer must attend to the alignment of “educational mission and values,” as things to truly care about in a process targeted at improvement in all aspects, efficient and effective. Not to be remiss, the American Association for Higher Education and the Fund for the Improvement of Postsecondary Education (FIPSE) highly recommend the following as best practices regarding adult education and higher education.
Assessment should then be 1) “linked to” the learning unit or module’s goals and mission; 2) “focused on improving student learning and academic programs”; 3) “faculty driven”; 4) “embedded in courses”; 5) “based on multiple measures, with emphasis on direct measures”; 6) “visible across the span of the curriculum, not just at the beginning and/or end of the student's program”; and 7) “continuous, dynamic, and systematic.” The former wrote for public consumption that “assessment is most effective when it reflects an understanding of learning as multidimensional, integrated, and revealed in performance over time.” It requires a demonstration of “not only what students know but what they can do with what they know,” involving “not only knowledge and abilities but values, attitudes, and habits of mind that affect both academic success and performance beyond the classroom.” It is then a reflection of understandings, with the aim of compiling a “more complete and accurate picture of learning, and therefore firmer bases for improving our students' educational experience.”


Wikipedia (2008) opens the discussion of best practices by first defining them “as the most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people.” This pertains to the adult education or continuing education arena, where one must attend to the fact that each learner is “very unique in the sense that they come from diverse background.” This commentary alludes to the cited reference to the work of Light (2001), who “attributes great college success in adult education to human relationships,” where a “close relationship between adult students and their advisors” is thought to be “one of the factors that make adult education work effectively because it goes beyond just choosing of courses to a plan that enhances individual growth.” Hence the first of the recommended practices is to create an “environment where they are respected and they feel that people care about them.” Hansman (2004) advises that a “good plan requires input from all stakeholders for it to be successful.” Caffarella (2002) offers this thought for inclusion: “the planning context and parameters of the program primarily affect which components are adopted.” These, combined with a rigorous, systemic, and staunch attentiveness to ethical practice in teaching, should preface decisions that “include community and societal beliefs rather than the planner’s personal beliefs.” With respect to implementation and course design, Wikipedia wrote that, duly considered, it is “very important when delivering any course material,” and, citing Merrill (2004), emphasized that “effective course design uses a systematic approach to planning all the elements of a course,” including instructional and delivery practice.

Where the learning environment’s physical accoutrements include or allow for a blended and/or online presentation of content, the thing to remember, according to Fadel (2010), is that “it is at that junction when technology, pedagogy, and leadership meet that the greatest opportunities for positive change appear to lie.” Fadel (2010) remarked, regarding the trends and challenges of the delivery of online content, that from “reflection on the case studies” there are five groups that matter: 1) “new structures and funding models,” 2) “relationships among teachers, pupils, and parents,” 3) “relationships among education, technology, and innovation,” 4) “more sophisticated blended approaches to learning,” and 5) “more sophisticated forms of assessment and evaluation.” These partial sets were followed by the telling comment that “it is critical also to look at the underlying conditions that provide the foundations of success, including culture, relationships, and approach.” The quest has been for what can be termed an association between variables and the quality of the education provided, where an “association” is defined as “any relationship between two measured quantities that renders them statistically dependent.” Wikipedia wrote that the term “‘association’ refers broadly to any such relationship, whereas the narrower term ‘correlation’ refers to a linear relationship between two quantities.” It should be remembered that the specification of probable correlation is what was posited here, as well as the significance of the “outliers.” At any rate, when contextual factors like culture are considered, the preference is for “a culture of responsibility and trust,” “good relationships,” and “collaboration,” according to Fadel (2010).



Alade & Buzzetto (2006) cited Martell and Calderon (2005) in noting that assessment is "an ongoing process that involves planning, discussion, consensus building, reflection, measuring, analyzing, and improving based on the data and artifacts gathered about a learning objective." The point made here goes to the frequency of assessment, which provides a base from which to reflect, plan, and implement change. As Alade & Buzzetto (2006), referring to the work of Orlich, Harder, Callahan & Gibson (2004), observe, "assessment encompasses a range of activities including testing, performances, project ratings, and observations"; while the latter can be cursory and unplanned, it can also be part of an intentional process of self-report from the learners. Alade & Buzzetto (2006) reported from Bennett (2002), concerning the "use of information technologies and e-learning strategies," that they "can provide an efficient and effective means of assessing teaching and learning effectiveness by supporting traditional, authentic, and alternative assessment protocols." While authenticity has been a benchmark striven for in this report, it is worthwhile to factor in the contribution of Vendlinski and Stevens (2002), as reported by Alade & Buzzetto (2006), that "technology offers new measures for assessing learning that will yield rich sources of data and expand the ways in which educators understand both learning mastery and teaching effectiveness." The various technologies alluded to might encompass "pre and post testing, diagnostic analysis, student tracking, rubric use, the support and delivery of authentic assessment through project based learning, artifact collection, and data," all of which are to be found in the preceding commentary.
The proposed set of assessment vehicles presupposes the purposes of "aggregation and analysis" and can be viewed as an argument in support of a program which "includes computerized longitudinal testing, online diagnostic testing, competitive networked simulations, rubrics, student discussion transcripts, taped presentations, and electronic portfolios."
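The "aggregation and analysis" step proposed above can be sketched in miniature. The following Python fragment is a hypothetical illustration, not part of any cited program: the learner names, the pre/post score fields, and the rubric scale are all assumptions made for the example. It combines two of the listed vehicles (pre/post diagnostic testing and rubric use) into a simple per-learner gain report:

```python
def gain_report(records):
    """For each learner, report the post-test gain and mean rubric score."""
    report = {}
    for name, data in records.items():
        gain = data["post"] - data["pre"]                    # pre/post testing
        rubric_avg = sum(data["rubric"]) / len(data["rubric"])  # rubric use
        report[name] = {"gain": gain, "rubric_avg": round(rubric_avg, 2)}
    return report

# Hypothetical learner data: percentage scores plus three rubric ratings.
records = {
    "learner_a": {"pre": 55, "post": 78, "rubric": [3, 4, 4]},
    "learner_b": {"pre": 62, "post": 70, "rubric": [2, 3, 3]},
}
print(gain_report(records))
```

Even this toy aggregation shows the appeal of e-assessment: once the instruments are computerized, such reports can be produced continuously rather than only at term's end.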


So, in an effort to provision, enhance, and nurture the discourse created by this structured excursion through the literature on assessment in adult education, consider the words of Alade & Buzzetto (2006), citing Swearington (n.d.) and Love & Cooper (2004): "good assessment serves multiple objectives and benefits a number of stakeholders." They also cite Dietal, Herman, and Knuth (1991) to the effect that "assessment provides an accurate measure of student performance to enable teachers, administrators, and other key decision makers to make effective decisions." Then, from Kellough and Kellough (1999), Alade & Buzzetto (2006) identified seven purposes of assessment: a) "improve student learning;" b) "identify students' strengths and weaknesses;" c) "review, assess, and improve the effectiveness of different teaching strategies;" d) "review, assess, and improve the effectiveness of curricular programs;" e) "improve teaching effectiveness;" f) "provide useful administrative data that will expedite decision making;" and g) "to communicate with stakeholders."

This document has identified the specific traits associated with the study for and attainment of the CCNA certification, while specifying the learning objectives. It has also aligned them with Bloom's taxonomy of educational objectives and the developmental concepts mentioned therein. But it is in the collection and analysis of the data obtained from the use of the computer where the stoutest arguments can be constructed. Perhaps the greatest advantage obtained using computerized assessment is the efficiency of the grading and of the subsequent analysis and interpretation of the results. Gronlund & Waugh (2009, p. 220) wrote that the modification of multiple-choice items, the addition of performance tasks, and the provision for "criterion-referenced interpretation" were specific things added by using a standardized achievement test battery. Gronlund & Waugh (2009, p. 220) also state that one of the best known methods of reporting criterion-referenced test scores is the "percentage-correct score," where the product is a report of the "percentage of test items in a test, or subtest" that are "answered correctly." When aggregated over districts and/or regions, these scores can be compared to the "national norm" and can support arguments for or against any proposed change to an existing curriculum. On an individual basis, "scores can be presented by clusters of items representing a content area, skill, or objective" (Gronlund & Waugh, 2009, p. 220). These scores, once calculated, can also be compared to any proposed set of standards or benchmarks previously specified (Gronlund & Waugh, 2009, p. 220). Regarding criterion-referenced interpretations of standardized tests, Gronlund & Waugh (2009, p. 222) stated that they "require a check on how well the objectives, content, and skills of the test match the local instructional program." Additionally, one must check "whether the construction of the test favors criterion-referenced interpretation," "whether there is a sufficient number of test items for each type of interpretation," and "how the performance standards are determined."
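The "percentage-correct score" reporting that Gronlund & Waugh describe can be made concrete with a short sketch. In the following Python fragment, the cluster names, item counts, and the 80% performance standard are illustrative assumptions (chosen to suit the CCNA context of this paper), not figures taken from their text:

```python
def percentage_correct(answered_correctly, total_items):
    """Percentage of test (or subtest) items answered correctly."""
    return 100.0 * answered_correctly / total_items

def cluster_report(clusters, standard=80.0):
    """Return (cluster, percent correct, met_standard) for each item cluster.

    Each cluster of items represents a content area, skill, or objective,
    and the result is compared against a preset performance standard.
    """
    rows = []
    for name, (correct, total) in clusters.items():
        pct = percentage_correct(correct, total)
        rows.append((name, round(pct, 1), pct >= standard))
    return rows

# Hypothetical item clusters: (items correct, items in cluster).
clusters = {
    "subnetting":     (18, 20),   # 90% -> meets the 80% standard
    "routing_basics": (12, 20),   # 60% -> below the standard
}
print(cluster_report(clusters))
```

The same per-cluster percentages, once aggregated over classes or districts, are what would be set against a "national norm" or local benchmark in the manner the passage describes.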


Bibliography

Alade, A. J., & Buzzetto, N. A., Best practices in e-assessment, Journal of Information Technology Education, Volume 5, 2006, University of Maryland Eastern Shore, retrieved 12/03/2011


Alonsabe, O. C., Assessment in the affective domain, Educational assessment, 2009, retrieved 12/05/2011 from http://olga-assessment.blogspot.com/2009/05/assessment-in-affective-domain.html

Angelo, T. A., & Cross, K. P., Techniques for assessing learner reactions to instruction, Classroom assessment techniques: A handbook for college teachers, 2nd Ed., 1993, Jossey Bass

Best Practices in Academic Assessment, The American Association for Higher Education, retrieved 12/06/2011


Brookfield, S. D., and Preskill, S., Discussion as a way of teaching: Tools and techniques for democratic classrooms, 2nd Ed., 2005, Jossey Bass


Fadel, C., Best practices in education technology, retrieved 12/03/2011


Garrison, D. R., & Vaughan, N. D., Blended learning in higher education: Framework, principles, and guidelines

Graphical network simulator, Microcore and Tinycore 3.8.2 Qemu images, retrieved 10/24/2011 from http://www.gns3.net/content/microcore-and-tinycore-382-qemu-images


Gronlund, N. E., & Waugh, C. K., Assessment of student achievement, 2009, Pearson

Hirschheim, R., Klein, H. K., & Lyytinen, K., Exploring the intellectual structures of information systems development: A social action theoretic analysis, Accounting, Management and Information Technologies, Volume 6, Issues 1-2, January-June 1996, Pages 1-64, retrieved 12/05/2011

Lammle, T., CCNA: Cisco certified network associate study guide, Deluxe Ed., Exam 640-801, 2004, Wiley Publishing, Inc.

Ortiz de Guinea, A., & Webster, J., Are we talking about the task or the computer? An examination of the associated domains of task-specific and computer self-efficacies, Computers in Human Behavior, Volume 27 Issue 2, March, 2011 retrieved 12/05/2011 from http://dl.acm.org/citation.cfm?id=1937632

Palloff, R. M., & Pratt, K., Building online learning communities: Effective strategies for the virtual classroom, 2nd Ed., 2007, Jossey Bass

Rajaraman, V., Analysis and Design/Documents on web multiple choice questions, retrieved 10/24/2011 from http://nptel.iitm.ac.in/courses/Webcourse-contents/IISc-BANG/System%20Analysis%20and%20Design/pdf/Multiple_Choice_Questions/mcq_m11.pdf

Saisi, P., Best practices in adult and continuing education, 2008, retrieved 12/03/2011 from http://adulteducation.wikibook.us/index.php?title=Best_Practices_in_Adult_and_Continuing_Education

Shank, P., & Sitze, A., Making sense of online learning: A guide for beginners and the truly skeptical, 2004, John Wiley & Sons/Pfeiffer

Sorenson, D. L., & Reiner, C., Charting seas of online student ratings of instruction, in Sorenson, D. L., & Johnson, T. D. (Eds.), Online student ratings of instruction: New directions for teaching and learning, Hoboken, NJ: J. Wiley & Sons

Using your SETL results: the place of SETL in evaluation, Student evaluation of teaching and learning, retrieved 11/26/2011 from http://www.studentcentre.utas.edu.au/setl/Staff/usingresults.html