Instructional Goals and Objectives

Susan is preparing instruction for hospital employees. Everyone at the hospital must follow the hand-washing procedure that hospital administration has identified as most effective. A need for instruction has been identified, and Susan knows the learners well. “What I need to do now,” thinks Susan, “is determine exactly the goals and objectives for this instruction so we can determine the right activities and assessments to ensure a healthy and safe work environment.”

At a northern university, Brian has been given the task of developing instruction on how to protect human subjects involved in research studies. The university president feels that the faculty need more information about how study participants are to be treated and the processes involved in getting the institution’s approval of the research procedures. Like Susan, Brian understands the need for the instruction and is knowledgeable about the learners. Brian tells his instructional media production team: “The instruction needs to begin with an explanation of its purpose. Faculty are going to want to know why they are doing this and what they are expected to accomplish.”
GUIDING QUESTIONS

• What is an instructional goal?
• What is an instructional objective?
• How do instructional goals and objectives differ?
• How does one begin writing or identifying instructional goals?
• How does one create instructional objectives?

The development of instructional goals and objectives depends on the type and purpose of the instruction one is creating. Creating instruction on how to fly a fighter jet requires specific objectives that have demonstrable outcomes. However, creating instruction about the history of flight may not require objectives written according to a systems approach.

Susan considers how best to approach hand-washing instructions. The subject matter experts have provided her with a set of specific steps a person must follow to effectively wash one’s hands. When she writes the goals and objectives for the instruction, Susan knows the objectives will have to describe how well the learners complete the steps involved in the hand-washing procedure because she will need to be able to point out specific actions taken by the learners that indicate they understand correct hand-washing methods.

Brian considers how best to explain the purpose of the instruction on treating human subjects. Because the instruction is a presentation of the reasoning behind creating safeguards for human subjects and the processes in place that protect people involved in research studies, he realizes that the goal will be relatively easy to state, but there will probably be few if any immediate and overt changes in the learners’ behaviors as a result of this instruction.
POPULAR APPROACHES TO SETTING GOALS AND OBJECTIVES

The approach to developing learning objectives most often used by instructional designers was created by Robert Mager. Mager’s approach is designed to generate performance objectives and is inextricably connected to behavioristic applications of instructional design. Mager recommends using three components in writing learning objectives (a brief code sketch follows the list):

1. Action Identify the action the learner will take when he or she has achieved the objective.
2. Condition Describe the relevant conditions under which the learner will act.
3. Criterion Specify how well the learner must perform the action.
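
To make Mager’s three-part structure concrete, here is a minimal sketch (ours, not Mager’s) that models an objective as a small data structure; the class name, field names, and the hand-washing example are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of Mager's Action/Condition/Criterion structure.
# The class and the example objective below are illustrative only.
from dataclasses import dataclass

@dataclass
class PerformanceObjective:
    action: str     # observable behavior the learner will exhibit
    condition: str  # circumstances under which the behavior occurs
    criterion: str  # standard defining acceptable performance

    def render(self) -> str:
        """Assemble the three components into an objective statement."""
        return f"{self.condition}, the learner will {self.action} {self.criterion}."

# Hypothetical objective for Susan's hand-washing instruction:
objective = PerformanceObjective(
    action="perform each step of the approved hand-washing procedure",
    condition="Given a sink, soap, and running water",
    criterion="in the correct order with no steps omitted",
)
print(objective.render())
```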

According to Mager, a learning objective is “a description of a performance you want learners to be able to exhibit before you consider them competent” (1984, p. 3). Dick, Carey, and Carey (2009) and Smaldino, Lowther, and Russell (2008) take similar approaches, focusing on the actions, conditions, and criteria.

Dick, Carey, and Carey suggest that goals and objectives are determined through one of two approaches. They are either prescribed by a subject matter expert (SME) or they are determined by a performance technology approach. SMEs may be called on to work with the instructional designer to articulate the appropriate goals and objectives for an instructional design project. Instead of having an SME prescribe goals and objectives, a performance technology approach derives them from the data gathered during a needs analysis. According to Dick, Carey, and Carey, once the goals are established, a subordinate skills analysis should be conducted in order to determine the specific performance objectives for the instruction.

SETTING GOALS

Goals describe the intention of the instruction. According to Mager, “A goal is a statement describing a broad or abstract intent, state or condition” (1984, p. 33). In general, goals cannot be directly perceived. For example, the statement “Students will appreciate classical music” is a very reasonable instructional goal, but it does not have specific, observable features. The students may be listening, but how does one determine if they are appreciative?

Regardless of any lack of visible evidence, setting goals for instruction is a critically important part of the instructional design process. It is often relatively easy to write goals if one is starting with a “clean slate” situation—one in which no instructional interventions have been attempted and no past practices have been established. However, one is rarely offered a completely clean slate when designing instruction. Often, a number of established instructional interventions are in place, and people may have lost sight of the original goals for this instruction. Instructional designers almost always work within an organizational structure with its own idiosyncratic demands. More than likely, tradition, politics, and the predilections of decision-makers will be critical factors in determining the goals for any instructional design project (Dick, Carey & Carey, 2009).
Professionals in Practice

When I taught 8th-grade English, I traditionally ended the year by having my classes read Shakespeare’s As You Like It. The goal was to build students’ confidence with a difficult text (we did this through activities that included “translating” passages into modern language and acting out and illustrating scenes from the play). One year, I had a particularly difficult group of students; I decided not to read As You Like It with the group because I felt the goal of building their confidence would not be met—I felt that trying to do something this ambitious with this particular group might actually have the opposite effect.

When the students found out I was not planning to read Shakespeare with them in the spring, they expressed deep disappointment. I learned from them that reading a Shakespeare play in 8th grade was now considered a rite of passage by the students who had me as a teacher before. Regardless of the fact that I did not feel this activity was an appropriate way to address one of my goals for this particular group of students, I wound up doing it anyway because the students would have felt “cheated” if I had not. I realized that one goal for this activity was not something that I had created but that had grown out of the student community.

—Abbie Brown

former teacher at George Washington Middle School

Ridgewood, New Jersey

In the example FAST chart in Figure 6.3, “Maintain Health” would be the goal derived from the action of brushing one’s teeth. To some, this may be obvious, but for many, tooth brushing is so ingrained as a given activity that they lose sight of the fact that the larger goal is maintaining a healthy body. The FAST chart technique is particularly helpful when an established or expected set of instructional activities is part of standard practice and you, as the instructional designer, are trying to determine why those activities are important.
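
To illustrate how a FAST chart climbs from a concrete activity to a broad goal, the sketch below encodes the tooth-brushing example as a how-why chain; the intermediate entries are our own illustrative assumptions, not taken from Figure 6.3.

```python
# Illustrative FAST-style "how-why" chain for the tooth-brushing example.
# Reading down the list answers "why?"; reading up answers "how?".
fast_chain = [
    "Brush teeth twice daily",  # concrete instructional activity
    "Remove plaque",            # why brush? (assumed intermediate step)
    "Prevent tooth decay",      # why remove plaque? (assumed step)
    "Maintain health",          # the goal the chart surfaces
]

for how, why in zip(fast_chain, fast_chain[1:]):
    print(f"{how} -> in order to -> {why}")
```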

TRANSLATING GOALS INTO OBJECTIVES

Just as a goal is the intention of the instruction, an objective is the intended outcome of each instructional activity. The intended outcome can be described as what the learner will be able to do upon completing the instruction. Determining the intended outcome in advance is an important step in the design process if student success will ultimately be measured against some standard or specific evaluation criteria. Clearly stated instructional objectives also make it easier for a design team to produce instruction that meets with the approval of everyone involved. Smith and Ragan (2005) write:

Objectives are valuable to all members of the learning system. They aid the designer since they provide a focus of the instruction, guiding the designer in making decisions about what content should be included, what strategy should be used, and how students should be evaluated. The specification of clear objectives is especially critical when a number of individuals—such as designers, content experts, graphic artists, and programmers—are working together to produce instruction. In these situations, learning objectives serve as a concrete focus of communication. (p. 97)

It is critically important to keep in mind that a well-stated instructional objective describes an observable or measurable action performed by the learner. The objective should describe what the learner might be observed doing that he or she could not do prior to the instruction. A typical “rookie mistake” is to write an objective that is actually a description of the instructional activity. “Students will view a 30-minute videotape on the basics of photography” is not a well-written instructional objective; a better objective would be: “At the end of viewing a 30-minute videotape on the basics of photography, students will demonstrate their ability to choose the correct f-stop setting for a variety of lighting conditions.”

Another theoretical construct that is popular among instructional designers and useful for determining instructional objectives is Gagne’s hierarchy of intellectual skills (Gagne, 1985; Zook, 2001). Gagne takes an approach to the domains of instruction similar to that of Bloom’s taxonomy, but there are important differences. Gagne divides what can be learned into three categories: declarative knowledge (verbal information), procedural knowledge (motor skills, intellectual skills, and cognitive strategies), and affective knowledge (attitudes). Gagne states that there are five possible types of learning outcome: intellectual skill, cognitive strategy, verbal information, motor skill, and attitude. Gagne’s hierarchy of intellectual skills (the skills most often addressed through instruction) describes a progression that can be followed to bring a student to the point of being able to solve problems on his or her own. The four steps in this progression are discrimination, defined concept, rule or principle, and problem-solving (see Figure 6.5).
FIGURE 6.5 Gagne’s Hierarchy of Intellectual Skills.

In this hierarchy, you start at the bottom (discrimination) and work up to problem solving.

Traditionally in instructional design, a goal is a general statement of the educator’s intentions, and an objective is a specific instance of the goal in action. If the goal is “Students will develop social skills,” specific objectives may include: “Students will say ‘please’ and ‘thank you’ at the appropriate times” or “Students will hold the door for each other as they enter and exit the building.”

Articulating instructional goals is important; they are the written embodiment of the intention behind the instructional intervention. Using instructional goals to create instructional objectives can be equally important, particularly if the effectiveness of the instruction and the achievement of the learners will be tested by measuring them against a set of standards or a list of specific evaluation criteria. Well-stated objectives help everyone involved in creating and supporting the instructional event by providing focus for the instructional activities.

EVALUATING THE SUCCESS OF GOAL AND OBJECTIVE SPECIFICATIONS

The specified instructional goals and objectives should be supported by the data gathered during learner and task analysis. The question to answer about the goals and objectives is, “Do these goals and objectives direct us to create instruction that supports the identified population of learners in gaining skill with the tasks that have been identified?” In traditional instructional design, it is important to take some time to consider whether the goals and objectives developed have truly grown out of the learner and task analyses.

GOALS AND OBJECTIVES AND THE INSTRUCTIONAL DESIGN PROCESS

Setting goals and objectives is a critically important part of the instructional design process. No matter which approach you take, setting goals and objectives should help you answer the following questions:

• What is the overall purpose of the instructional activity?
• Is the intention of the instruction accurately reflected in the goals and objectives?
• Have the traditions, politics, and predilections of the organization been accounted for when developing the instructional goals? Do the goals and objectives match the instructional intent regardless of the organization’s influence?
• Are there any specific, observable behaviors the learners should exhibit after they have completed the instruction?
• What evaluation strategies will be used to determine if the instructional goals and objectives are appropriate?

Summary

Goals and objectives define the intention of the instruction. An instructional goal is a general statement about the ultimate intention of the instruction. An instructional objective is a more specific statement about how and to what degree the instruction will affect the learners. The objective should describe an action taken by the learner at the conclusion of the instructional event that can be empirically measured by an observer.

Goals must be articulated in order to create instruction. However, objectives are subordinate to goals and may not be necessary to an instructional design. Objectives are critically important if the learners are to be evaluated based on standards or specific criteria. If learners will not be evaluated in this manner—for example, if the instruction is intended to foster creativity or critical thinking—then writing specific instructional objectives may actually be an inappropriate step for the instructional designer.

Popular approaches to writing goals and objectives include Mager’s development of performance objectives (determining the action, condition, and criterion); Dick and Carey’s dual approaches of determining goals and objectives by either consulting subject matter experts or taking a performance technology approach (deriving goals and objectives from the data gathered during needs and task analysis); Heinich, Molenda, Russell, and Smaldino’s ABCD approach (audience, behavior, conditions, degree); and Morrison, Ross, and Kemp’s terminal and enabling objectives.

It is important for novice instructional designers to realize that they will most often be creating instruction for organizations that have their own traditions and political necessities; instructional objectives may be well-articulated, while the instructional goals may not be written down. Missing or poorly articulated instructional goals may be determined by using a FAST chart, working from a specific instructional objective back to a general goal.

Writing instructional objectives can be facilitated through the use of hierarchies or taxonomies that define the types and levels of instructional outcomes. Bloom’s taxonomy and Gagne’s hierarchy of intellectual skills are reference tools popular among educators.

In evaluating the success in writing instructional goals and objectives, one critically important question to consider is whether the goals and objectives lead to the creation of instruction that is appropriate and effective for the learners. Constant comparison of the goals to the objectives (and vice versa) can help make the final instructional product one that is truly useful.

Connecting Process to Practice

1. After reading about Brian’s instructional design challenge, do you think he did the right thing by creating objectives that are not performance objectives? Could Brian have written performance objectives for the instruction on human subjects?

2. You have been asked to create a six-week unit on writing poetry for a high school English class. How would you go about determining the appropriate goals and objectives for this?

3. Using the ABCD approach, write two performance objectives for this goal: “Students will understand the importance of making healthy snack choices.”

4. You are the instructional designer in the Human Resources department of a mid-sized corporation. You have been assigned the task of creating instruction that addresses the appropriate use of corporate expense accounts. What factors may affect the goals you set for this instruction?

5. Your employer wants to be sure that everyone in the organization knows CPR. What goals might you derive for instruction that supports this?

6. You are teaching a group of ten-year-olds how to play soccer. You want them to improve their ball-passing skills. What goals and objectives might you set for your instruction?

References

Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H. & Krathwohl, D. R. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: David McKay.

Dick, W., Carey, L. & Carey, J. O. (2009). The systematic design of instruction, 7th edition. Columbus, OH: Allyn & Bacon.

Gagne, R. (1985). The conditions of learning, 4th edition. Philadelphia: Holt, Rinehart and Winston.

Mager, R. (1984). Goal analysis. Belmont, CA: Lake Publishing Company.

Morrison, G. R., Ross, S. M. & Kemp, J. E. (2007). Designing effective instruction, 5th edition. New York: John Wiley & Sons.

Orlich, D. C., Harder, R. J., Callahan, R. C., Trevisan, M. S. & Brown, A. H. (2010). Teaching strategies: A guide to effective instruction, 9th edition. Wadsworth.

Prensky, M. (2001). Digital game-based learning. McGraw-Hill.

Smaldino, S., Lowther, D. L. & Russell, J. D. (2008). Instructional technology and media for learning, 9th edition. Merrill Prentice Hall.

Smith, P. L. & Ragan, T. J. (2005). Instructional design, 3rd edition. New York: John Wiley & Sons.

Thornburg, D. D. (1998). Brainstorms and lightning bolts: Thinking skills for the 21st century. San Carlos, CA: Thornburg Center.

Zook, K. (2001). Instructional design for classroom teaching and learning. Houghton Mifflin.
Performance and Behavioral Outcomes in Technology-Supported Learning: The Role of Interactive Multimedia
Passerini, Katia . Journal of Educational Multimedia and Hypermedia 16. 2 (2007): 183-210.
Abstract
Understanding the impact of different technological media on the achievement of instructional goals enables the delivery of subject matter more effectively. Among the various instructional technologies that advance learning, educators and practitioners recurrently identify interactive multimedia as a very powerful tool for instruction and training. This study measures the effects of multimedia technology on learning project management topics by comparing outcomes of its use to both traditional classroom and text-based instruction. It analyzes learners’ performances within a knowledge representation framework that looks at recall and application of facts, concepts, principles and rules, and procedures as representations of instructional outcomes. A pretest-posttest quasi-experimental design was used to address methodological concerns expressed in earlier comparison analyses. The quasi-experimental design supports the examination of selected instructional objectives achieved within a specific timeframe (short modules of instruction). The results present an actionable matrix tying the use of a selected medium of instruction to specific learning objectives. While the matrix is developed to take into account different complexity levels of the subject matter, future research is needed to expand this analysis beyond a single field of study.
Full Text
THE USE OF COMPUTERS FOR TRAINING AND EDUCATION: THE UNRESOLVED EFFECTIVENESS ISSUES
Arguments to justify technology use in instruction cover a variety of dimensions. Some researchers have looked at the role that technology plays in student motivation. For example, Gagné and Merrill (1994) identified the need for “gaining learner attention” as a crucial event of the learning process. Several scholars recognized that technology plays a major role in attracting and focusing student attention. Volker (1992) confirmed that successful learning occurs when students are engaged in creating their own products. Keller’s (1983) motivational design model maintained that motivation influences students to choose learning goals and to work toward these goals. Researchers, such as Arnone and Grabowski (1992), furthered Keller’s findings that student motivation is increased by perceptions of “control of their learning,” which is typically supported by several technologies and by interactive multimedia in particular.
Other arguments for technology use in education and training involve the recognition of unique instructional capabilities of some educational media in linking learners to information resources and in helping them to visualize problems and solutions. Unique features, such as tracking progress while the learner reviews the content or automating the distribution of information, relieve faculty from some administrative tasks, such as record keeping, paperwork, and handout reproduction. Some trends in educational technology utilization are supported by theoretical research on learning and cognition. Other trends in instructional technology investments in education and training have not yet found adequate justification through the measurement of learning impact (Roblyer, Edwards, & Havriluk, 1997). Several effectiveness issues have yet to be resolved.
An ongoing controversy in the literature deals with the learning effectiveness of instructional technology. In the last three decades, every time a new instructional technology has been made available, researchers have tried to assess its impact on learning. Key scholars claimed that media such as text, audio, video, animation, and multimedia influence learning (Kozma, 1991). Many other influential authors (Clark, 1983, 1994; Martin & Rainey, 1993; McClure, 1996) claimed that there is no difference in learning outcomes based on the medium. A few scholars (Moore & Kearsley, 1996) argued that researchers should not be asking effectiveness questions because “for any group of students, the environment in which learning occurs and the medium of communication between the teacher and the learners are not significant as predictors of achievement” (p. 65, italics added). Others (Jones & Paolucci, 1998) called for further research. There has not been a final answer, in spite of several studies (Mayer & Anderson, 1991) and meta-analyses conducted over the past 25 years (Bosco, 1986; Fletcher-Flinn & Gravatt, 1995; Kulik, 1994) competing to answer the primary question: “Do computers and related technologies make a difference in learning?”
The problem lies in the formulation of the primary question. Learning is a complex phenomenon and occurs based on several concurrent factors. Asking the question is like asking: “Does the application of learning theories make a difference in learning?” There is no one-size-fits-all answer to a complex phenomenon like learning. Several meta-analyses (Khalili & Shashaani, 1994) merged studies that look at interactive video, computer-based, computer-assisted, and computer-enhanced instruction or computers, as if the findings for each technology could or should be extended to the entire horizon of educational uses of computers.
This article claims that comparative analyses should only include technology with consistent characteristics and functionalities. If multimedia/hypermedia comparative analyses are grouped under the same umbrella, only a few pre-2000 studies have addressed effectiveness. This study answers the call for further research in multimedia (Fletcher-Flinn & Gravatt, 1995; Jones & Paolucci, 1998; Liao, 1998). It contributes to solving the effectiveness controversy by focusing on the comparative impact of interactive multimedia on learning.
SUPPORTING LEARNING THEORIES
There are several models that support the expectation that interactive multimedia is a highly successful learning environment. Multimedia allows the synchronization of multiple media in hypermedia and hypertext delivery environments. Therefore, it enables the realization of multiple representation systems. Its effectiveness is based on the organization of the knowledge delivery system (structure of the navigation map, coordination of multiple representations), as well as the mode of delivery (type of media used, symbol systems, and media formats). These structural features of multimedia (organization of content, multiple modes of delivery) impact the construction of mental models (Jonassen, 1990) and display a positive effect on the cognitive system.
Relevant learning theories supporting the expectation that interactive multimedia has a positive impact on the cognitive system are summarized in Table 1.
ORGANIZATION AND RESEARCH QUESTIONS
This research compares learning and attitudes in students gaining knowledge of project management topics by attending a class presentation, reading a textbook, or using an interactive multimedia CD-ROM. The study participants were graduate and undergraduate students enrolled in degree programs at a large private university on the East Coast of the United States. In both the undergraduate and graduate populations, the study was replicated under comparable conditions, with three groups of graduate students and three groups of undergraduate students being exposed to either (a) learning in the classroom, (b) using a multimedia CD-ROM, or (c) reading a textbook chapter. The purpose of the replication was to measure whether effectiveness (measured by recall and application tasks) and satisfaction vary by age (defined by graduate and undergraduate status) and by subject matter characteristics (called “complexity level”). To account for differences in complexity levels, the study was replicated again on comparable populations based on differences in subject matter complexity: three groups of graduate students were exposed to high-complexity topics (scheduling tools in project management), while three other groups of graduate students worked with lower complexity topics (change management). The same setup was repeated with undergraduate students. Table 2 presents details of the study replication.
In this study, the definition of learning is built on Merrill’s (1983) performance-content matrix. Based on Merrill’s definition, the effects of the three instructional delivery environments (independent variables) on learning (dependent variable) are evaluated by comparing students’ performances, measured by recall and application, in specific content areas: facts, concepts, principles, and procedures. Student attitudes (dependent variable) are assessed by comparing learner satisfaction with the instructional content and delivery medium. The intervening variables analyzed include student characteristics, such as gender, age, prior knowledge, computer abilities, learning preferences, and the subject characteristics, such as complexity level of the topic (Figure 1).
The learning environment can be broadly defined as a system of interactions that generate a change in human performance (such a change is defined by Driscoll, 1994, as the “moment” when learning occurs). It is not just a physical location, such as a classroom, but any place, medium, and interaction that permits performance change. Textbooks in a reading room, an interactive multimedia CD-ROM on a laptop computer, or an instructor in a classroom are part of the learning environment and influence the type of interactions the learner applies to the instructional content.
To evaluate the comparative effectiveness of the diverse instructional delivery environments, the research focuses on the following question:
* Q1: Which instructional delivery environment best supports learning: multimedia, textbooks, or in-class lectures?
Other related questions addressed in this study include:
* Q1.1. Which environment is more effective at achieving the learning objectives of recall or application?
* Q1.2. Which environment is more effective at delivering a low-complexity and a high-complexity topic?
* Q2. Which environment, if any, is more appealing to graduates and undergraduates?
Recall performance relies on memory. Learners remember a fact, state the definition of a concept, recite a rule, or list the steps of a procedure. Application performance requires the learner to apply the content to a new situation or problem, explain an instance, or differentiate the class/category to which the information belongs. Merrill’s model also includes attitudes, measured in the study as satisfaction and appeal of the instruction, as an outcome of learning. The use of Merrill’s (1983) performance-content matrix model facilitates future implementations of the study findings that can be effectively translated into actionable prescriptions for instruction. For example, if interactive multimedia is a comparatively better learning tool for the recall of facts, it can be used when learners are tasked with remembering a long list of items and related definitions.
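
As a rough sketch of how Merrill’s performance-content matrix might be operationalized when tagging test items, consider the lookup table below; the cell contents are invented project management examples, not items from the study’s instruments.

```python
# Hypothetical sketch of Merrill's performance-content matrix as a lookup
# table; each key pairs a performance level with a content type. The
# example items are invented for illustration.
merrill_matrix = {
    ("recall", "fact"): "State the default time unit of a Gantt chart.",
    ("recall", "procedure"): "List the steps for building a PERT network.",
    ("application", "concept"): "Classify a deliverable as scope or schedule.",
    ("application", "procedure"): "Compute the critical path of a new project.",
}

def example_item(performance: str, content: str) -> str:
    """Return an example test item for a (performance, content) cell."""
    return merrill_matrix.get((performance, content), "no example defined")

print(example_item("application", "procedure"))
```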
Framework and Expectations
Based on the review of the literature on learning with media, inferences on expected comparative effectiveness were made to guide the expected direction of the relationships of the research framework (see Figure 1):
* Multimedia instruction was expected to be the most effective for recall tasks because its interactive characteristics raise attention, motivate users, offer a variety of modes of interaction through feedback and performance assessment and, thus, enhance retention of information by providing “plenty of action and novelty” (Stemler, 1997; Liao, 1998);
* Text instruction was expected to be highly effective in procedural and problem-solving application tasks, based on studies from Kieras and Bovair (1984) showing that people reading text instructions infer procedures (an application task) more quickly than people using observation and repeating demonstrated assembly behavior without full comprehension of the steps and mechanisms. Other studies (Palmiter & Elkerton, 1993) found that text can be more effective than animation (motion media) for presenting procedural information. They found that people participating in an animated demonstration of HyperCard authoring procedures initially scored better (in speed and accuracy) than people who only saw textual information. However, in a delayed posttest (one week after the first posttest), scores were either equivalent or better (faster) for people who were exposed to text only. Najjar (1996) observed that an important explanation for these findings is related to differences in processing efforts (Salomon, 1984; Walker, Jones, & Mar, 1983). The higher effort that learners undertake to read and understand printed information results in improved long-term encoding of information;
* In-class instruction that uses transparencies and presentation slides was also expected to be effective in delivering a variety of content, but stronger in recall tasks than in application tasks, since in a lecture presentation students enjoy both audio and visual reinforcement. Visuals are components of the instructor’s transparencies. Audio (or sound) is the main means of delivery (the instructor’s voice). The instructor’s presentation of the written materials on the transparencies allows dual coding (Paivio, 1986) and the increased learning advantages of redundant media (text and visuals), as well as reinforcement of the verbal presentation (Najjar, 1996);
* Learners’ attitude about the lecture environment would be highly dependent on the presenter’s capabilities, communication, and pedagogical skills, as well as experience and mastery of the subject matter. For example, Janda (1992) showed that the use of slides, outlines, and transparencies increases learners’ motivation. Based on the presenter’s communication skills, students are more or less engaged with the combination of the verbal information and the psychomotor movements of the instructor (body language). While contextual factors such as the use of transparencies and slides are controlled in this study, personal characteristics and experience are not controlled.
The model presented in Figure 1 assumes that a content expert with a higher mastery of the topic may communicate key concepts better than a lecturer with lower mastery (the differentiation between “content expert” and “content novice” presented in Figure 1 is based on the number of years of experience in teaching the subject matter). While this assumption would warrant further discussion and testing, the focus of the study limits the scope of the investigation. Results from the population surveyed show that this assumption may be confirmed (with the outlined limitations) by the direction of the findings (overall satisfaction values were higher with the presenter who had more years of experience in teaching the subject matter).
Based on the literature, text was expected to be a particularly ineffective instrument for recall tasks. However, for application tasks with immediate posttesting, it could be as effective as multimedia and in-class instruction. Text could be more effective than multimedia and in-class instruction for several application tasks because it offers higher reflection time and takes advantage of long-term memory. In terms of user interaction with the instructional materials, text is generally the least motivating learning environment. Multimedia instruction was forecasted to be very effective because it presents the highest number of media and offers interactivity and cognitive engagement with the material. Multimedia would be very effective with recall tasks, as well as specific application tasks.
As displayed in Figure 1, this study differentiates the instructional conditions on the basis of the complexity levels of the subject characteristics. In particular, the study looks at two types: hard and soft topics in project management. The criteria for differentiating among complexity levels across disciplines are generalized and applicable within the discipline of instruction. These criteria are based on a universally recognized cognitive taxonomy that organizes learning on the basis of higher-complexity stages of information elaboration. Bloom’s (1956) taxonomy of learning goals and objectives summarizes the way student learning progresses through various stages of increasing complexity.
Content Organization in Textbooks, CD-ROM and In-Class Modules
The pedagogical materials used in this study were selected because they present equivalent content, in their own instructional delivery environment (text, multimedia, and face-to-face instruction).
The textbooks. The textbooks on which the multimedia CD-ROM and the in-class instruction are based are two major project management publications:
* Frame (1994). The New Project Management: Tools for an Age of Rapid Change, Corporate Reengineering, and Other Business Realities. The Jossey-Bass Management Series
* Frame (1995). Managing Projects in Organizations: How to Make the Best Use of Time, Techniques, and People. The Jossey-Bass Management Series
Both publications are ranked highly (by number of sales and reviewers’ comments) in the project management literature. They present similar design and layout features, and they address complementary topics. They use diagrams and drawings to reinforce understanding, offer several examples, and occasionally include mini-case studies to foster reflection and application. The case studies present problems and offer solutions at the same time, to emulate a mechanism for feedback provision.
The interactive multimedia application. The “Project Management in Organizations (PMO) CD-ROM” is the multimedia version of the books. The CD-ROM closely follows the books’ organization: units mirror chapter titles and structure. The application is designed to teach the fundamentals of project management. The CD-ROM provides a stand-alone tool for learning introductory project management topics. Users are expected to work with the software independently, at their own pace. The software includes practice tests and feedback features to self-assess the level of competency reached by using the application. The material is organized into 15 lessons that can be followed in any order or accessed sequentially (following a typical project life cycle). Each lesson specifies primary learning objectives and is organized consistently in submenus that include:
* Overview: a video of Dr. J. Davidson Frame explaining the topic
* Key Concepts: a bullet list of key concepts and objectives of the lesson
* Real World: audio files with narrators discussing real life applications of the topic
* Activity: “drag and drop,” “drill and practice” activities in a variety of formats for reinforcement
* In Practice: interactive case studies built on real-world experiences.
The key concepts area provides feedback on module organization, and tracks lessons completed by the user. The CD-ROM includes different media formats and features that are designed to leverage prior findings on learning with multiple media. For example, the dual coding theory (Paivio, 1986) maintains that humans process information in two ways: for their meaning (verbal information), and for their visual images (nonverbal information). Dual coding reinforces the understanding and storing of verbal and nonverbal information in memory. Nugent (1982) stated that the dual processing of verbal and nonverbal images increases learning. It follows that, for multimedia applications to enable learning and retention, information needs to be presented with more than one channel (text and graphics, text and audio or other combinations). The interactive multimedia CD-ROM applies these principles by using:
* full motion video (short video-clips summarizing and emphasizing key points of each instructional module);
* audio files (short-contextual audio reinforcement of activities and key concepts that appear in text format);
* text-based animation (dynamic text-based pop-up windows that associate definitions and explanations to elements and graphics on the screen);
* extensive interactivity (with case studies and other exercises, quizzes and simulations);
* high-quality graphics (with drawings and diagrams to clarify concepts); and
* user-friendly navigation interface (based on familiar metaphors, such as office-like settings).
The CD-ROM also offers a Toolbox with examples of tools, access to Resources (calculator, glossary, notepad), and an Organizer to sort information in the program and access testing. The software includes a Tour, with video explanations of the organization of the program.
The software provides several navigational options and enables different levels of user input. The user is often asked to provide a wide range of responses at different levels of interactivity: from typing full answers, to drag and drop (Figure 2), to performing complex calculations.
The face-to-face class sections. The project management modules in this study (“scheduling tools” and “change control procedures”) were developed and taught by Dr. J. Davidson Frame with the support of overhead transparencies. The transparencies are based on the textbooks and use the same graphics, charts, problems, and exercises. The instructor primarily uses markers to write on transparencies during the presentation. Occasionally, display boards are used to support explanations. The instructor offers several examples, frequently asks the audience questions, provides feedback to responses, and encourages participation through in-class discussion of section problems. Frequently, cases and other exercises are completed in class with the instructor’s support.
In summary, the multimedia CD-ROM offers a variety of media and content representations. The in-class instruction is supported by visuals, interaction strategies, and a delivery structure that keep the learners engaged with the content. The textbooks are organized sequentially and provide cases and contextual written feedback.
DATA COLLECTION METHODS
This study uses a quasi-experimental design, which involves one factor (learning project management topics such as scheduling tools and change control) with three treatments (learning from the in-class lecture, reading the textbook, or using the interactive multimedia CD-ROM). This study uses the in-class lecture (IC) group as the benchmark of the analysis (control). The subjects’ prior knowledge of project management is measured before and after the experimental treatments by distributing the same test questionnaire (with questions in a different order). Because all groups receive the same pretest and posttest questionnaires, the testing effect (interaction of the subject with testing) is assumed to influence the posttest results equally. To verify the assumption, and further test the comparability of the groups, pretest results are also compared across groups and entered as covariates in the analysis.
The study is replicated across topics such as “scheduling tools” (higher complexity topic, or “hard topic”) and “change control” (lower complexity topic, or “soft topic”), and across groups of undergraduate students (Ug) and graduate students (Gr). The two age populations are further split into two subpopulations each (Hard-Ug & Soft-Ug; Hard-Gr & Soft-Gr). Students enrolled in three different sections of the same course in each of the subpopulations are randomly assigned to the three treatments (in-class lecture, interactive multimedia, and textbook instruction). The in-class lecture treatment groups receive instruction either from the “same” instructor (Instructor 1, highly experienced) authoring the instructional materials, or from a “different” instructor (Instructor 2, fairly experienced), the latter using the same lecture notes, handouts, and flow of presentation but having fewer years of experience in project management. The rationale for maintaining the “same” instructor across graduate groups, as well as using a “different” instructor in the undergraduate groups, is related to controlling for both teaching-method effects (Clark, 1983) and teaching-diffusion effects (Kulik & Kulik, 1986).
DATA ANALYSIS AND DISCUSSION
A preliminary step in evaluating performance outcomes was to determine whether pretest scores were equal across each population of interest. The results of the ANOVA analysis showed that pretest scores were not equivalent among the three instructional delivery environments (multimedia, in-class, and textbook instruction) in at least two groups (hard-graduate and soft-graduate). Since pretest scores were different (that is, groups were not equal with regard to prior knowledge), comparisons were conducted on the amount of learning improvement (the difference between posttest and pretest scores) rather than on total posttest scores.
Finding group differences in pretest results required a further examination of the strength of the relationship between pretest and posttest scores. Multiple analyses of covariance were conducted, with the pretest score acting as a covariate of the dependent variable (posttest score). The entire analysis is available from the author. The ANCOVAs showed that pretest scores were significant covariates in two groups, but the strength of this relationship was low to moderate (with the covariate accounting for 8% or 12% of the variance in the dependent variable). Nevertheless, since pretest scores were significant covariates in at least two of the four groups analyzed, the entire data analysis focused on differences between posttest and pretest results.
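
A sketch of these two steps follows, assuming a hypothetical scores.csv file with treatment (IC/M/T), pretest, and posttest columns; the article does not name its statistical software, so Python with statsmodels is used here purely as a stand-in.

```python
# Sketch: check pretest equivalence across treatments, then run an ANCOVA
# with pretest as a covariate of posttest. File and column names are
# hypothetical stand-ins for the study's data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("scores.csv")  # columns: treatment, pretest, posttest

# One-way ANOVA on pretest scores: were the groups equal at the start?
pre_model = ols("pretest ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(pre_model, typ=2))

# ANCOVA: treatment effect on posttest, controlling for pretest.
ancova = ols("posttest ~ pretest + C(treatment)", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))

# When pretests differ, compare gain scores rather than raw posttests.
df["gain"] = df["posttest"] - df["pretest"]
print(df.groupby("treatment")["gain"].mean())
```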
LEARNING DIFFERENCES BY INSTRUCTIONAL ENVIRONMENT
With regard to Question 1 (and related questions), a nondirectional hypothesis was formulated to simplify the multiple comparison levels:
H_{A,1}: The average difference between posttest and pretest scores of subjects participating in different types of instruction in project management core topics (in-class instruction “IC;” interactive multimedia “M;” textbook/s “T”) is not equal in at least one of the populations of interest.
The General Linear Model in the form of Analysis of Variance (ANOVA) was used to focus on the relationships between the discrete independent variables (types of instructional delivery environments) and the continuous dependent variable (average difference between posttest and pretest). In each group, an F probability of less than 0.05 represented the upper limit for rejecting the null hypothesis of equality between average differences (between-sample and within-sample variation). The use of an a posteriori test (Scheffé’s S method for equal variances, or the Dunnett C for unequal variances) identified which population mean was significantly different. Table 3 summarizes significant relationships and ranks the differences in learning (with number 1 being the highest difference and number 3 the lowest). For example, the graduate group learning a high-complexity project management topic such as scheduling tools (hard-graduate) showed significant differences in learning by treatment condition, with the in-class group displaying a higher difference between posttest and pretest scores than the interactive multimedia group. The least learning in this group occurred in the class using the textbook only.
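
The per-group comparison can be sketched as follows, reusing the hypothetical scores.csv from the previous sketch. Note that scipy and statsmodels do not ship Scheffé’s S or Dunnett C, so Tukey’s HSD is shown as a stand-in post-hoc test, not the article’s exact method.

```python
# Sketch: one-way ANOVA on gain scores plus a post-hoc comparison.
# Tukey's HSD stands in for the article's Scheffé S / Dunnett C tests.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("scores.csv")  # hypothetical file, as before
df["gain"] = df["posttest"] - df["pretest"]

groups = [df.loc[df["treatment"] == t, "gain"] for t in ("IC", "M", "T")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # reject equality if p < 0.05

if p_value < 0.05:
    # Identify which pairs of treatment means differ.
    print(pairwise_tukeyhsd(df["gain"], df["treatment"]))
```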
Resulting relationships based on the data are summarized in Table 4. The matrix reports that only two groups display significant results at the 95% level of confidence (Hard-Gr and Soft-Ug in Table 3). The data also indicate that the relative effectiveness of an instructional delivery medium is topic dependent. Any discussion of learning outcomes needs to focus on the differences between hard and soft topics. For higher complexity topics, in-class instruction is the most effective learning environment; for lower complexity topics, multimedia is more effective than in-class instruction. Although not statistically significant, the data regarding other topic-population relationships, specifically hard-undergraduate and soft-graduate, support these outcomes.
Learning Differences by Recall and Application
Question 1.1 looks at the differences in performance scores and breaks down posttest results into recall and application tasks.
H_{A,1.1}: The average difference between posttest and pretest scores on questions entailing recall/application of facts, concepts, principles, and procedures, answered by subjects participating in different types of instruction in project management core topics (in-class instruction “IC;” interactive multimedia “M;” textbook/s “T”), is not equal in at least one of the populations of interest.
A multivariate analysis of variance was used to measure learning outcomes in more than one dependent variable. The MANOVA analysis assessed the relationships between treatments (discrete independent variables) and the dependent variates (Hair, Anderson, Tatham, & Black, 1998, p. 326), such as recall (facts, concepts, principles, and procedures) and application (facts, concepts, principles, and procedures). The test used Wilks’ lambda to evaluate the statistical significance of the model. Pillai’s criterion, Hotelling’s trace, and Roy’s gcr criterion, which are similar to Wilks’ lambda (Hair et al., 1998, p. 351), were reviewed as well. Since MANOVA is used to assess overall differences among groups, separate univariate tests were also employed. Paired-sample t-tests were employed to establish the direction of change for the posttest-pretest differences displaying statistically significant results.
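
A sketch of the multivariate step, again with hypothetical column names for the recall and application subscores; statsmodels’ mv_test() reports Wilks’ lambda alongside Pillai’s trace, the Hotelling-Lawley trace, and Roy’s greatest root, matching the statistics the article reviews.

```python
# Sketch: MANOVA over recall and application gain subscores, followed by
# a paired-sample t-test to establish the direction of change. Column
# names are hypothetical stand-ins.
import pandas as pd
from scipy import stats
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("scores.csv")
mv = MANOVA.from_formula(
    "recall_gain + application_gain ~ C(treatment)", data=df
)
print(mv.mv_test())  # Wilks, Pillai, Hotelling-Lawley, Roy

# Direction of change for one group's recall scores (pre vs. post).
m_group = df[df["treatment"] == "M"]
t_stat, p_value = stats.ttest_rel(m_group["recall_post"],
                                  m_group["recall_pre"])
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```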
As with Table 4, Table 5 summarizes significant relationships and ranks the differences in learning. Details on each group are available from the researcher.
The matrix presented in Table 6 is derived from the data analysis and shows significant differences in the relationship between media type and learning objective based on group and topic complexity. While in-class instruction displays the highest impact in terms of recalling and applying relatively high-complexity topics, textbook and multimedia instruction appear equally effective for learning lower complexity topics. In-class instruction is found to be the least effective for achieving recall and application of soft topics.
The results from the data analysis summarized in the matrix indicate that interactive multimedia is the favored environment for recall and application of soft topics, but the data are not statistically significant for the graduate student group. Therefore, the null hypothesis, that there are no differences in any of the instructional environments in the soft-graduate and the hard-undergraduate groups, cannot be rejected.
To further understand the relative effectiveness of the treatments, a matrix of learning objectives by task was developed for each set of variables (facts, concepts, principles, and procedures learning tasks) on the basis of the paired-sample t-test results. The matrix in Table 7 is important for several reasons.
1. It shows significant differences in the recall and application of the different types of tasks in which learners are asked to show proficiency.
2. It shows the media that repeatedly displayed effectiveness in specific tasks. Although not all the relationships between learner objectives, learning tasks, and media preference were statistically significant, they point to important trends about media environments and learning.
3. It is a useful reference for choosing which medium to use for accomplishing specific objectives.
The findings of media effects by tasks and topics permit a series of conclusions about media selection with regard to hard- and soft-topic learning. Table 7 shows that recall performance on high-complexity topics is repeatedly higher for graduate students exposed to in-class instruction. However, for undergraduate students, interactive multimedia is particularly effective for the recall of hard-topic concepts and procedures, and textbook instruction is more effective for the recall of complex principles and rules. Application performance is generally higher among those who learned by in-class instruction. Nevertheless, for learning procedures, students’ application performance is higher when they used textbooks. This finding confirms the literature from Kieras and Bovair (1984).
Recall performance on lower complexity topics is consistently higher with interactive multimedia instruction than with other media, except when the task involves recalling concepts. An in-class lecture by an above-average instructor is found to be the most effective learning environment for student recall of concepts. However, in-class instruction effectiveness is consistently low in the soft-undergraduate group. This leads to the conclusion that when a fairly experienced speaker (the content novice in Figure 1) presents the lecture to a soft-undergraduate group, performance is higher with multimedia and textbook instruction. Learners may prefer using class time for more complex topics while learning lower complexity topics on their own. Not only is multimedia the most effective medium for the recall of soft topics, but textbook learning also produces better learning outcomes than in-class instruction.
Data regarding application performance on lower complexity topics show that multimedia is very effective for both graduate and undergraduate learner groups. However, similar to recall performance, in-class instruction is more effective for graduate learners tasked with applying concepts. Undergraduate students learn concepts and procedures better when they use textbooks. It is to be noted that, while the topic complexity differentiation may help extend the study to other disciplines, the results of Table 7 refer to the learning of project management topics. Additional research is needed to validate whether, for example, learning procedures and concepts in physics yields results in the same direction. These limitations are further discussed in the conclusions portion of this article.
Learning and Topic Complexity
With regard to question 1.2, the analysis was again conducted on the average difference between posttest and pretest scores.
The pretest and posttest cumulative values were computed by dividing intermediate results (totals by type of question) and overall results by the number of questions (19 vs. 14) on each test. Data are combined to aggregate soft and hard topics across groups. Since the distribution of this variable is not normal, nonparametric tests are employed. The Kruskal-Wallis and separate Mann-Whitney nonparametric (NPAR) test findings are summarized in Table 8.
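
A sketch of the nonparametric step, using the same hypothetical scores.csv as the earlier sketches; Kruskal-Wallis compares all three treatments at once, and Mann-Whitney U handles a single pairwise comparison.

```python
# Sketch: nonparametric comparisons used when gain scores are not normal.
import pandas as pd
from scipy import stats

df = pd.read_csv("scores.csv")  # hypothetical file, as in earlier sketches
df["gain"] = df["posttest"] - df["pretest"]

# Kruskal-Wallis across all treatment groups.
groups = [g["gain"] for _, g in df.groupby("treatment")]
h_stat, p_value = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Pairwise Mann-Whitney U, e.g., in-class lecture vs. multimedia.
ic = df.loc[df["treatment"] == "IC", "gain"]
mm = df.loc[df["treatment"] == "M", "gain"]
u_stat, p_u = stats.mannwhitneyu(ic, mm)
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.4f}")
```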
The ranks output in Table 8 shows that the average difference between the Hard and Soft topics is significant (Chi-square, p = 0.025). Specifically, the Hard-topic group displays overall higher improvements than the Soft-topic group, and these differences are driven by the different impact of the instructional treatments across Hard and Soft topics, as displayed in Figure 3, which visualizes the nonparametric (NPAR) test results.
In terms of comparisons of instructional delivery environments, higher learning occurs in the graduate lecture (first) and CD-ROM (second) groups, and in the undergraduate CD-ROM group (first). A final run of the Kruskal-Wallis test (Table 9) associates statistical significance with these rankings.
Because the data analysis shows that differences in learning are related to the complexity characteristics of the topic/discipline being taught, any discussion of effectiveness must report the differences and the nature of the complexity characteristics. Corroborating the earlier discussion on subject matter differences, these results show that additional research could focus on identifying groups of subject matters (based, for example, on complexity characteristics) that would benefit more from interactive multimedia support.
Learning Attitudes
With regard to the question of which environment is more appealing, the study analyzed satisfaction across different groups:
H_{A,2}: The average satisfaction of subjects participating in different types of instruction (in-class instruction “IC;” interactive multimedia “M;” textbook/s “T”) is not equal in at least one of the populations of interest (graduate and undergraduate).
To answer this question, data on satisfaction with soft and hard topics were aggregated by group (graduate and undergraduate). The results from the combined data are summarized in Table 10.
After combining the results by learner population (graduate and undergraduate), the level of satisfaction with the learning experience was found to be higher in the graduate group than in the undergraduate group. The highest level of satisfaction in the undergraduate group occurs in the in-class instruction group. This finding is particularly interesting when comparing learner performances in the undergraduate group with their reported satisfaction with the instructional experience. Although undergraduate performance in soft-topic lectures was low to negative, their reported satisfaction with the experience was either equal to or higher than their satisfaction with interactive multimedia. The results imply that learner satisfaction with the instructional experience is not associated with higher performance. Multiple ANCOVAs with satisfaction as a covariate of performance improvements show that the level of satisfaction is not correlated with learning outcomes. This is an interesting area for further research, as it apparently contradicts models such as those set forth in flow state theories and emotional intelligence (Goleman, 1995).
Overall Findings
The expected relationships presented in Figure 1 were only partially confirmed in the attitude measures (satisfaction and appeal). The results highlighted that the relationships vary depending on the topic complexity of the subject matter (in this case, project management). Table 11 shows the differences between the expected (Figure 1) and actual relationships.
Several inferences can be made based on the results:
* Multimedia instruction is not the most effective learning environment for recall tasks. Text is as effective as multimedia at promoting recall of lower complexity topics, while in-class instruction is more effective than multimedia for the recall of higher complexity topics.
* The high effectiveness of textbooks (as effective as multimedia) in the recall of soft topics illustrates the importance of visual learning at both the dynamic and static levels. Visual reinforcement is available both in the textbooks, through charts and diagrams, and in the multimedia CD-ROM, through graphics and contextual help. More specifically, both dynamic and static visualization appear to play an important role, at least within the context of the specific subject matter (project management).
* The high effectiveness of in-class instruction for the recall of high-complexity topics suggests that the ability to recall is also related to factors such as the class environment, the instructor's presentation, verbal reinforcement, and the quality (here represented by experience) of the instructor. Face-to-face interaction remains a medium for rich communication, and one that is the most dynamic and adaptive to the environmental conditions of the learners. This suggests that, for effective learning, synchronous and contextual interaction with the content owner (either face-to-face or technology-mediated) remains a fundamental ground rule of good pedagogy.
* The in-class environment might be associated with better performance and behavioral results because learners are mostly focused on the lecturer and do not undertake other tasks, such as navigation of the multimedia application, that might conflict with their memorization efforts.
* Text instruction is as effective as multimedia for learning lower complexity topics. Thus, low-complexity topics are suitable for self-paced and independent learning.
* For higher complexity topics, in-class instruction is the most effective learning environment.
* Text is found to be more effective than in-class instruction only when the learning objectives refer to the application of procedures, which is consistent with earlier findings in the literature (Palmiter & Elkerton, 1993) described in the section “Framework and Expectation.” However, caution should be applied when extending these findings to other subject areas.
* Overall, in-class instruction is observed to be highly effective for higher complexity topics both in recall and in application tasks.
* Attitude is not found to be highly dependent on presenter capabilities and communication skills (as had been expected in the study framework), except when soft topics are being presented:
* Within each population (undergraduate and graduate), learners show a comparable level of satisfaction with multimedia and in-class instruction, both significantly higher than their satisfaction with textbooks. However, with a content novice (a not highly experienced speaker), learners prefer using interactive multimedia when a soft topic is being presented. This might be explained by an interest in using class time for challenging topics and leaving less complex study to self-paced learning.
* However, when grouping graduate and undergraduate students together to identify the overall level of satisfaction with their instructional experience, the higher satisfaction ratings are associated with the more expert presenter, suggesting that future research should be conducted to identify the relationships among the various drivers of satisfaction in general, and the expertise of the speaker/presenter in particular.
OPEN PROBLEMS AND FUTURE RESEARCH
This study uses short modules of instruction as a research strategy to eliminate the effect of competing variables (such as time of study, repeated use of the material, and so forth) that might occur outside of a controlled laboratory environment. However, the use of specific short modules of instruction (such as those listed in Table 2) could bias results toward recall tasks instead of application tasks. While specific application skills are gained in a short time frame, based on the learners' understanding and processing of the information presented, a major portion of application skills is best acquired through longer instruction modules. Students' self-paced study and review of instructional materials enhance the ability to apply the newly acquired knowledge. The findings of this study with respect to application tasks, therefore, are mostly limited to identifying application knowledge obtainable through short segments of instruction. A longer exposure to the subject topic across the instructional media would be expected to influence students' application skills and higher cognitive levels in a positive direction.
This study is also limited because it looks at a specific subject: project management topics. Complexity levels of the subject matter are reviewed in this study (differentiating between hard and soft topics and repeating the treatment in both groups), but they are assessed only within the same subject matter. Future research should look at findings in other disciplines that require different approaches to instructional design and might present valuable implications for learning.
Additional research efforts could be directed toward replicating the research methodology in studies employing longer modules of instruction to identify whether the relationships between media choice and learning outcomes change as a function of time. Interesting implications may be found in longitudinal studies that assess the retention of the learned topics. Although in-class instruction has been found highly effective, it might not be as effective for long-term retention as it is for short-term recall.
The research may also be extended to web-based instructional environments to identify whether the effectiveness of in-class instruction varies when the learners have an opportunity to interact with the instructor over the Internet. Interactive multimedia authoring software is increasingly enabling the transfer of high-quality media segments over interconnected networks and through fast modems. Video streaming technologies and animation plug-ins (Macromedia® Shockwave, Real Networks® Real Player, and Microsoft® Media Player) transfer multimedia content in the unbound environment of web-based instruction. Their application opens further areas for interesting research comparisons on one of the promises of web-based education: supplementing the asynchronous learning typical of self-paced multimedia instruction with the synchronous communication available through web-based courseware.
Comparisons on learner performance and behavioral outcomes with regard to interactive and web-based delivery of multimedia will enable further evaluation of interactivity in computer-mediated learning environments. Several other interesting research applications include comparisons of traditional distance learning programs (correspondence learning through textbooks and video-based instruction with tapes) with multimedia content delivered over the Web. These comparisons could provide the tools for understanding the weight of “synchronous interaction with the instructor” on students’ learning and motivation. They could provide the right recipes to strategically combine self-paced, instructor-led, and computer-mediated instruction not only to successfully achieve learning objectives but also to improve learner attitudes toward the instructional experience.
Summary and Contributions
This research indicates that the effectiveness of technology-based learning depends on the nature of the presented topic. In-class instruction is more suitable for high-complexity topics, while those studying lower-complexity topics benefit from self-paced learning using interactive multimedia software. In terms of learning objectives, student recall performance is higher than application performance in a short module of instruction.
Positive attitudes toward interactive multimedia and textbooks are statistically higher than toward in-class instruction when the latter uses a moderately experienced speaker (the content novice in Figure 1) to deliver a soft topic. Attitudes toward in-class instruction are higher than attitudes toward textbooks, but roughly equal to those toward multimedia, when the speaker is highly experienced (a content expert) or when the topic of instruction is highly complex.
In addition, differences in comparative performance depend on the nature of the learning task: the learning effectiveness of a medium is also related to whether the student is tasked with demonstrating recall or application of facts, concepts, principles, or rules. In an unexpected finding, student performance is not significantly correlated with students' attitudes toward the instructional environment.
The contribution of this research with respect to the advancement of interactive multimedia research is twofold. First, it increases the number of studies in this field. The research responds to the call for further research that focuses on interactive multimedia because, despite the various technology effectiveness publications, only a few studies on multimedia and hypermedia were found. Second, it delineates differences in performance by comparing outcomes with tasks. This is a new research direction in the area.
Finally, the number of replications makes it possible to capture important differences in how various types of topics are learned. This differentiation and the various replications are key outcomes of this research. The results show that to effectively compare performance and behavioral outcomes, the level of topic complexity must be distinguished. This research leverages well-known cognitive taxonomies to assess complexity levels and extends this analysis to different learning tasks (the recall and application of facts, concepts, principles, and procedures).
References
Arnone, M., & Grabowski, B. (1992). Effects on children's achievement and curiosity of variations in learner control over an interactive video lesson. Educational Technology Research and Development, 40(1), 15-27.
Bloom, B. S. (Ed.). (1956). A taxonomy of educational objectives: Handbook I: The cognitive domain. New York: McKay.
Bosco, J. (1986). An analysis of evaluations of interactive video. Educational Technology, 25, 7-16.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.
Cunningham, D. J., Duffy, T. M., & Knuth, R. A. (1993). The textbook of the future. In C. McKnight, A. Dillon, & J. Richardson (Eds.), Hypertext: A psychological perspective (pp. 19-50). Chichester, England: Ellis Horwood.
Driscoll, M. P. (1994). Psychology of learning for instruction. Boston: Allyn and Bacon.
Duffy, T. M., & Knuth, R. A. (1990). Hypermedia and instruction: Where is the match? In D. H. Jonassen & H. Mandl (Eds.), Designing hypermedia for learning (Vol. 67, pp. 199-225). Berlin: Springer-Verlag.
Fletcher-Flinn, C. M., & Gravatt, B. (1995). The efficacy of computer assisted instruction (CAI): A meta-analysis. Journal of Educational Computing Research, 12, 219-242.
Frame, J. D. (1994). The new project management: Tools for an age of rapid change, corporate reengineering, and other business realities. San Francisco: Jossey-Bass.
Frame, J. D. (1995). Managing projects in organizations: How to make the best use of time, techniques, and people. San Francisco: Jossey-Bass.
Gagné, R. M., & Merrill, M. D. (1990). Integrative goals for instructional design. Educational Technology Research and Development, 38(1), 23-30.
Gardner, H. (1993). Multiple intelligences: The theory in practice. New York: Basic Books.
Goleman, D. (1995). Emotional intelligence: Why it can matter more than IQ. New York: Bantam Books.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data analysis. Upper Saddle River, NJ: Prentice Hall.
Jacobson, M. J., & Spiro, R. J. (1995). Hypertext learning environments, cognitive flexibility, and the transfer of complex knowledge: An empirical investigation. Journal of Educational Computing Research, 12(4), 301-333.
Janda, K. (1992). Multimedia in political science: Sobering lessons from a teaching experiment. Journal of Educational Multimedia and Hypermedia, 1(3), 341-354.
Jonassen, D. H. (1990). Semantic network elicitation: Tools for structuring hypertext. In C. Green & R. McAleese (Eds.), Hypertext: State of the art. Oxford, UK: Intellect Books.
Jones, T. H., & Paolucci, R. (1998, Spring/Summer). The learning effectiveness of educational technology: A call for further research. Educational Technology Review, 10-14.
Keller, J. M. (1983). Motivational design of instruction. In C. M. Reigeluth (Ed.), Instructional design theories and models: An overview of their current status (pp. 383-434). Hillsdale, NJ: Lawrence Erlbaum.
Khalili, A., & Shashaani, L. (1994). The effectiveness of computer applications: A meta-analysis. Journal of Research on Computing in Education, 27(1), 48-61.
Kieras, D. E., & Bovair, S. (1984). The role of a mental model in learning to operate a device. Cognitive Science, 8, 255-273.
Kozma, R. B. (1991). Learning with media. Review of Educational Research, 61(2), 179-211.
Kulik, C. C., & Kulik, J. A. (1986). Effectiveness of computer-based instruction in colleges. AEDS Journal, 19, 81-108.
Kulik, J. A. (1994). Meta-analytic studies of findings on computer-based instruction. In E. L. Baker & H. F. O'Neil (Eds.), Technology assessment in education and training (pp. 9-33). Hillsdale, NJ: Lawrence Erlbaum.
Liao, Y. C. (1998). Effects of hypermedia versus traditional instruction on students' achievement: A meta-analysis. Journal of Research on Computing in Education, 30(4), 341-359.
Martin, E. D., & Rainey, L. (1993). Achievement and attitude in a satellite-delivered high school science course. The American Journal of Distance Education, 7(1), 54-61.
Mayer, R. E., & Anderson, R. B. (1991). Animations need narrations: An experimental test of a dual-coding hypothesis. Journal of Educational Psychology, 83, 484-490.
McClure, P. A. (1996). Technology plans and measurable outcomes. Educom Review, 31(3), 29-30.
Merrill, M. D. (1983). Component display theory. In C. M. Reigeluth (Ed.), Instructional-design theories and models: An overview of their current status (pp. 282-333). Englewood Cliffs, NJ: Lawrence Erlbaum.
Moore, M. G., & Kearsley, G. (1996). Distance education. Belmont, CA: Wadsworth.
Najjar, L. J. (1996). Multimedia information and learning. Journal of Educational Multimedia and Hypermedia, 5, 129-150.
Nugent, G. (1982). Pictures, audio, and print: Symbolic representation and effect on learning. Educational Communication and Technology Journal, 30, 163-174.
Paivio, A. (1986). Mental representations: A dual-coding approach. New York: Oxford University Press.
Palmiter, S., & Elkerton, J. (1993). Animated demonstrations for learning procedural computer-based tasks. Human-Computer Interaction, 8(3), 193-216.
Roblyer, M., Edwards, J., & Havriluk, M. A. (1997). Planning and implementation for effective technology integration. In Integrating educational technology into teaching (pp. 27-53). Upper Saddle River, NJ: Merrill/Prentice Hall.
Salomon, G. (1984). Television is “easy” and print is “tough”: The differential investment of mental effort in learning as a function of perceptions and attributions. Journal of Educational Psychology, 76, 647-658.
Spiro, R. J., Feltovich, P. J., Jacobson, M. J., & Coulson, R. L. (1991). Cognitive flexibility, constructivism, and hypertext: Random access instruction for advanced knowledge acquisition in ill-structured domains. Educational Technology, 31(5), 24-33.
Stemler, K. L. (1997). Educational characteristics of multimedia: A literature review. Journal of Educational Multimedia and Hypermedia, 6(3/4), 339-359.
Tulving, E. (1983). Elements of episodic memory. London: Oxford University Press.
Volker, R. (1992). Applications of constructivist theory to the use of hypermedia. In Proceedings of Selected Research Presentations at the Annual Convention of the AECT.
Walker, N., Jones, J. P., & Mar, H. H. (1983). Encoding processes and the recall of text. Memory and Cognition, 11, 275-282.

 
Discussion Question 1
List the methods of determining goals and objectives for instruction. How are these methods similar to each other? What key elements do they have in common? How do these methods differ from each other?
Discussion Question 2
Select a relevant unit of your choice and discuss how you would determine the appropriate goals and objectives for it. How would you distinguish between goals and objectives for this unit? On what basis would you develop goals and objectives for this unit?
