10 Aug 2017

8 Free Interactive Video Tools to Impact Student Learning

As educators, we all know that videos engage students more readily than printed text. Although having students analyze and reflect on videos should be balanced with textual analysis and interpretation, videos do have the added value of using both the visual and auditory channels, helping students retain more information so that they are in a better position to deconstruct the messages encoded in a video, reflect on it, and discuss it with peers. However, just as with reading long, intricate texts, students need embedded formative feedback. Watching a 20-minute video, for example, might disengage a student, or might deliver more information than the student can retain. The best way to help students think about the video they are watching is to embed questions and discussions in it.

This is why we have listed 8 free video tools that can help you build activities or questions around the videos students watch at home as part of a blended, online, or flipped course. We present them in order of preference, starting with open-source technologies, as we support and acknowledge the effort put into open source as opposed to for-profit edtech products.

1. H5P (open source)

H5P is much more than an interactive video platform; it has many other possibilities. For this post, though, we are only discussing its interactive video feature. H5P Interactive Video is an HTML5-based content type that lets users add multiple-choice and fill-in-the-blank questions, pop-up text, and other types of interactions to their videos using only a web browser. What we also found awesome is that you can use H5P interactive video on WordPress, Moodle, and Drupal, and that it lets you track student performance. Here’s an example we did a while ago.


Here’s another example.


2. Videonot.es (open source)

Sometimes you just want your students to take intermittent notes on particular segments of a video. Whether note-taking, posing questions, self-questioning, reflecting, or just summarizing, students use these techniques to improve their performance. Videonot.es is a powerful online tool that automatically synchronizes the notes students type with the video; later, they just click on a line and the video jumps to the relevant part. Videonot.es is integrated with Google Drive, so any student can save their notes there and share the file with teachers for assignment feedback.




3. Office Mix (free)

I know you adore PowerPoint. Don’t we all? Fortunately, Microsoft has added a PowerPoint add-in, Office Mix, that turns your PowerPoint presentation into an interactive video, for free. Yup! For free.

You can add audio, video, and digital ink; create polls and interactive apps; create quizzes and simulations; design assessments and get reports; gain insights and analytics about viewers; and play the result back on any device. Microsoft has created a decent set of tutorials for Office Mix. It also has a page just for educators to support blended, flipped, or completely online instruction. Download and install it here. Here’s an example.

4. Vialogues (free, registration needed)

Short for video dialogues, Vialogues claims to help anyone start meaningful discussions around videos. Built by EdLab at Teachers College, Columbia University, we have no reason to doubt that it delivers what it promises. Vialogues involves 4 easy steps to get started: Create, Invite, Interact, and Share.

It is an award-winning discussion platform that proves videos are both powerful teaching resources and the ultimate conversation starters, providing a space for users to hold meaningful, dynamic, time-stamped discussions about videos.


5. Videoposit (freemium for individual accounts)

Lately, I’ve heard a lot of positive feedback on Videoposit from teachers. Based on the language used on the Videoposit website, it seems mostly geared towards higher education and corporate settings, although K-12 schools are also said to be supported. Videoposit claims to improve professional development and onboarding of instructors and employees, promising effortless authorship, learner engagement, accountable tracking, and a seamless workflow.

I wouldn’t trust a website that uses its owner’s pet dog as a logo, but you are welcome to try it anyway.


6. Edpuzzle (freemium for individual accounts)

Edpuzzle claims to be the easiest way to engage students with videos: pick a video, add a magical touch, and track students’ understanding. Edpuzzle saves time and improves student learning by letting you take an existing video from YouTube, Khan Academy, Crash Course, and the like (or upload your own), enable self-paced learning with interactive lessons, add your own voice and questions along the video, and see whether your students are watching your videos, how many times, and what answers they give. Edpuzzle is also available as an Android and iOS app and as a Chrome extension for YouTube.


7. TED-Ed Lessons (free)

If you are like me, you binge-watch TED talks. They are tremendously inspiring. TED-Ed Lessons was created to build lessons around TED videos, or any other video as well. The video questions are sorted into four categories: Watch (the student watches the video), Think (the student answers multiple-choice questions), Dig Deeper (the student answers an open-ended question or follows up on additional resources), and Discuss (students discuss the video with peers). This categorization is a great way to differentiate learning in terms of cognitive processes. The technical difference between TED-Ed Lessons and Edpuzzle, Videoposit, and H5P above is that the questions in TED-Ed Lessons are not embedded in the video: the student watches the whole video and then answers the questions, although toggling between watching and answering is an option too (perhaps it is better to let students choose whether to answer questions while watching or afterwards?).


8. Google Forms? (free)

I know what you are thinking! Again? Back to Google? Well, the generic nature of G Suite means you can remix anything you want to produce what you need for your instructional objectives. Using the Google Forms quiz template, you can embed a YouTube video followed by questions about it. Although the questions are not embedded in the video itself, this helps students check their understanding and think more deeply about the issues in the video. You can also add YouTube video questions at successive points in the video so that students watch a segment and answer questions before moving on to the next part. For example, question 1 would be a YouTube video that starts at 0 sec. and ends at 1 min.; question 2 includes the same video starting at 1 min. and ending at 3 min.; and so on (see here for how to do it).
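If you prepare many of these staggered segments, a small helper can save some clicking. YouTube embed URLs accept `start` and `end` query parameters (both in seconds), so you can generate one URL per segment and paste each into its own question. This is a minimal sketch: the video ID and segment times are placeholders, and it is worth checking that the Forms video item you use actually honors both parameters before building a whole quiz around it.

```python
# Build YouTube embed URLs for successive segments of one video,
# so each quiz question can show only its portion of the clip.

def segment_embed_url(video_id, start_sec, end_sec):
    """Return a YouTube embed URL limited to [start_sec, end_sec] seconds."""
    return (f"https://www.youtube.com/embed/{video_id}"
            f"?start={start_sec}&end={end_sec}")

# Question 1: 0 sec to 1 min; Question 2: 1 min to 3 min (placeholder times).
segments = [(0, 60), (60, 180)]
urls = [segment_embed_url("VIDEO_ID", s, e) for s, e in segments]
for url in urls:
    print(url)
```

Each printed URL then goes into the video field of the corresponding Forms question, keeping students on one segment at a time.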


We hope you liked the interactive video tools above. Have you used any of these before? Are they new to you? Are you willing to try one this school year? Share your thoughts in the comment box below.

07 Aug 2017

Improving Student Learning with Effective Learning Techniques Part 3: Summarization

Students often need to read and understand a lot of information by extracting the most important ideas, which requires discarding less important ideas and connecting ideas within a text. Accomplishing these goals requires students to write summaries of to-be-learned texts (often as part of, or a prerequisite to, text analysis and evaluation). Although summarizing a text is considered an instructional goal in its own right, this post is only concerned with whether summarization improves student performance on subsequent criterion tests on the same material.

 

Description and Why it should work

As an introduction to the issues relevant to summarization, we begin with a description of a prototypical experiment. Bretzing and Kulhavy (1979) had high school juniors and seniors study a 2,000-word text about a fictitious tribe of people. Students were assigned to one of five learning conditions and given up to 30 minutes to study the text. After reading each page, students in a summarization group were instructed to write three lines of text that summarized the main points from that page. Students in a note-taking group received similar instructions, except that they were told to take up to three lines of notes on each page of text while reading. Students in a verbatim-copying group were instructed to locate and copy the three most important lines on each page. Students in a letter-search group copied all the capitalized words in the text, also filling up three lines. Finally, students in a control group simply read the text without recording anything. (A subset of students from the four conditions involving writing were allowed to review what they had written, but for present purposes we will focus on the students who did not get a chance to review before the final test.) Students were tested either shortly after learning or 1 week later, answering 25 questions that required them to connect information from across the text. On both the immediate and delayed tests, students in the summarization and note-taking groups performed best, followed by the students in the verbatim-copying and control groups, with the worst performance in the letter-search group (see Fig. 3).


 

 

Bretzing and Kulhavy’s (1979) results fit nicely with the claim that summarization boosts learning and retention because it involves attending to and extracting the higher-level meaning and gist of the material. The conditions in the experiment were specifically designed to manipulate how much students processed the texts for meaning, with the letter-search condition involving shallow processing of the text that did not require learners to extract its meaning.

More than just facilitating the extraction of meaning, however, summarization should also boost organizational processing, given that extracting the gist of a text requires learners to connect disparate pieces of the text, as opposed to simply evaluating its individual components (similar to the way in which note-taking affords organizational processing).

So how strong is the evidence that summarization is a beneficial learning strategy? One reason this question is difficult to answer is that the summarization strategy has been implemented in many different ways across studies, making it difficult to draw general conclusions about its efficacy.

Studies show that “summarization is not one strategy but a family of strategies”. Depending on the particular instructions given, students’ summaries might consist of single words, sentences, or longer paragraphs; be limited in length or not; capture an entire text or only a portion of it; be written or spoken aloud; or be produced from memory or with the text present.

The focus on training students to summarize reflects the belief that the quality of summaries matters. If a summary does not emphasize the main points of a text, or if it includes incorrect information, why would it be expected to benefit learning and retention? Consider a study by Bednall and Kehoe (2011, Experiment 2), in which undergraduates studied six Web units that explained different logical fallacies and provided examples of each. Of interest for present purposes are two groups: a control group who simply read the units and a group in which students were asked to summarize the material as if they were explaining it to a friend. Both groups received the following tests: a multiple-choice quiz that tested information directly stated in the Web unit; a short-answer test in which, for each of a list of presented statements, students were required to name the specific fallacy that had been committed or write “not a fallacy” if one had not occurred; and, finally, an application test that required students to write explanations of logical fallacies in examples that had been studied (near transfer) as well as explanations of fallacies in novel examples (far transfer). Summarization did not benefit overall performance, but the researchers noticed that the summaries varied a lot in content; for one studied fallacy, only 64% of the summaries included the correct definition. Higher-quality summaries that contained more information and that were linked to prior knowledge were associated with better performance.

Several other studies have supported the claim that the quality of summaries has consequences for later performance. Garner (1982) showed that the quality of summaries matters: Undergraduates read a passage on Dutch elm disease and then wrote a summary at the bottom of the page. Five days later, the students took an old/new recognition test; critical items were new statements that captured the gist of the passage. Students who wrote better summaries (i.e., summaries that captured more important information) were more likely to falsely recognize these gist statements, a pattern suggesting that the students had extracted a higher-level understanding of the main ideas of the text.

 

Generalizability

Learning conditions

Many different types of summaries can influence learning and retention; summarization can be simple, requiring the generation of only a heading or a single sentence per paragraph of a text, or it can be as complicated as an oral presentation on an entire set of studied material. Whether it is better to summarize smaller pieces of a text (more frequent summarization) or to capture more of the text in a larger summary (less frequent summarization) has been debated. The debate remains unresolved, perhaps because what constitutes the most effective summary for a text likely depends on many factors (including students’ ability and the nature of the material).

One other open question involves whether studied material should be present during summarization. A few studies have pointed out that having the text present might help the reader identify its most important points and relate parts of the text to one another. However, summarizing a text without it present involves retrieval, which is known to benefit memory, and also prevents the learner from engaging in verbatim copying. The answer to whether studied text should be present during summarization is most likely complicated, and it may depend on people’s ability to summarize when the text is absent.

 

Student Characteristics

Most of the research on individual differences has focused on the age of students, because the ability to summarize develops with age. However, younger students (e.g., middle school students) can benefit from summarization following extensive training, and this benefit has been linked to improvements in note-taking. It also seems plausible that students with more domain-relevant knowledge would be better able to identify the main points of a text and extract its gist.

 

Materials

For the most part, characteristics of materials have not been systematically manipulated, which makes it difficult to draw strong conclusions about this factor, even though its importance was pointed out more than 15 years ago.

 

Criterion tasks

The majority of summarization studies have examined the effects of summarization on either retention of factual details or comprehension of a text (often requiring inferences) through performance on multiple-choice questions, cued recall questions, or free recall. Other benefits of summarization include enhanced metacognition (with text-absent summarization improving the extent to which readers can accurately evaluate what they do or do not know) and improved note-taking following training.

Whereas several studies have shown benefits of summarization (sometimes following training) on measures of application, others have failed to find such benefits. One week after learning, students who had summarized performed no differently than students in a control group who had only read the passages in answering questions that tapped a basic level of knowledge (fact and comprehension questions). Students benefited from summarization when the questions required the application or analysis of knowledge, but summarization led to worse performance on evaluation and synthesis questions.

Across studies, results have also indicated that summarization helps later performance on generative measures (e.g., free recall, essays) more than it affects performance on multiple-choice or other measures that do not require the student to produce information. Because summarizing requires production, the processing involved is likely a better match to generative tests than to tests that depend on recognition.

Finally, concerning test delays, several studies have indicated that when summarization does boost performance, its effects are relatively robust over delays of days or weeks. Similarly, benefits of training programs have persisted several weeks after the end of training.

 

Implementation Issues

Summarization would be feasible for undergraduates or other learners who already know how to summarize. For these students, summarization would constitute an easy-to-implement technique that would not take a lot of time to complete or understand. The only concern would be whether these students might be better served by some other strategy, but certainly summarization would be better than the study strategies students typically favor, such as highlighting and rereading. Relatively intensive training programs are required for middle school students or learners with learning disabilities to benefit from summarization.

 

Overall Assessment: Low Utility

On the basis of the available evidence, we rate summarization as low utility. It can be an effective learning strategy for learners who are already skilled at summarizing; however, many learners (including children, high school students, and even some undergraduates) will require extensive training, which makes this strategy less feasible. Although summarization has been examined with a wide range of text materials, many researchers have pointed to factors of these texts that seem likely to moderate the effects of summarization (e.g., length), and future research should be aimed at investigating such factors. Finally, although many studies have examined summarization training in the classroom, what are lacking are classroom studies examining the effectiveness of summarization as a technique that boosts students’ learning, comprehension, and retention of course content.

 

References

 

Bretzing, B. H., & Kulhavy, R. W. (1979). Notetaking and depth of processing. Contemporary Educational Psychology, 4, 145–153.

Bednall, T. C., & Kehoe, E. J. (2011). Effects of self-regulatory instructional aids on self-directed study. Instructional Science, 39, 205–226.

Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266

Garner, R. (1982). Efficient text summarization: Costs and benefits. Journal of Educational Research, 75, 275–279.

 

 

15 Jul 2017

Improving Student Learning with Effective Learning Techniques: Elaborative Interrogation (Part 1)


The achievement gap among students is widening, despite major strides by educational systems to bridge it. From my experience as an educator and educational leader, one of the chief factors affecting student achievement is learning techniques: techniques that can reasonably be taught to students so that they can use them independently, in the same or different contexts, at a later date. Many students use ineffective learning techniques; training them in more effective ones could improve their achievement. Many teachers steer students toward ineffective learning techniques because, given how ubiquitous those techniques are, they do not know about more effective ones (Dunlosky et al., 2013).


A comprehensive review of the literature by Dunlosky et al. (2013) offered recommendations on the utility of learning techniques for improving educational outcomes. The review covered 10 learning techniques, each labelled as low utility, medium utility, or high utility. The utility level (degree and scope of effectiveness) was based on each technique's generalizability across educational contexts and its promise for improving student learning.


In this post series, I will be discussing each learning technique in terms of

  1. General description of the technique and why it should work.

  2. How general are the effects of this technique?

  3. Effects in representative educational contexts

  4. Issues for implementation

  5. Overall assessment


The 10 learning techniques are


elaborative interrogation, self-explanation, summarization, highlighting/underlining, the keyword mnemonic, imagery use for text learning, rereading, practice testing, distributed practice, and interleaved practice (Dunlosky et al., 2013).


The authors assessed the generalizability of each technique's impact across four categories of variables:

  1. materials

  2. learning conditions

  3. student characteristics

  4. criterion tasks


The authors also stressed the importance of factual knowledge, not as an ultimate objective but as a prerequisite for deep learning at a subsequent stage (one thing that the recent fad for critical thinking in education has overlooked). Therefore, improving student retention of knowledge is essential for reaching other learning targets. They state that “if one does not remember core ideas, facts, or concepts, applying them may prove difficult, if not impossible”.


So, let’s begin with the first learning technique in this post.



Elaborative Interrogation

An ample body of evidence suggests that explanatory questioning is extremely significant in promoting learning. In particular, research has shown that answering “Why?” questions (embedded in the elaborative interrogation and self-explanation techniques) can facilitate learning.



Description and Why it should work

Elaborative interrogation, such as asking why a stated action was performed, boosts memory recall. The key to elaborative interrogation is “prompting learners to generate an explanation for an explicitly stated fact.”

Most EI studies followed a typical prompt format: “Why would this fact be true of this [X] and not some other [X]?”

The predominant conceptual account of elaborative-interrogation effects is that elaborative interrogation enhances learning by supporting the integration of new information with existing prior knowledge.




Generalizability

Learning conditions

Although most studies have involved individual learning, elaborative-interrogation effects have also been shown among students working in dyads or small groups.

Student Characteristics

Elaborative interrogation can be generalized to all learners; however, the extent to which it benefits younger learners is not clear. Students' prior knowledge has a significant impact on the effectiveness of the EI strategy.




Effects in Educational Contexts

Mostly, elaborative interrogation has been shown to enhance learning in representative educational contexts, although few studies have been conducted outside a laboratory setting. One particular study (Smith et al., 2010) involved undergraduates enrolled in a biology course, situated during class meetings in the adjoining lab section. Students completed an assessment of verbal ability and a prior-knowledge exam on material related to, but distinct from, the target material.

In the ensuing weeks, learners were given long, complex texts taken from a chapter in the textbook. For half of the learners, 21 EI prompts were “interspersed” throughout the text, “roughly one prompt per 150 words”, each incorporating a paraphrased statement from the text followed by “Why is this true?”. The other students were simply instructed to study the text at their own pace, without any prompts. Students then completed true/false questions about the material (none identical to the EI prompts). Performance was better for the EI group than the control group (76% versus 69%), “even after controlling for prior knowledge and verbal ability”.




Implementation Issues

There are two advantages to EI:

1. It requires minimal training for students to learn. Teachers can start with EI prompts interspersed in the text, or in a text explanation, and gradually let students come up with their own EI prompts.

2. EI is “reasonable with time demands”. It does not take a lot of time on the part of the teacher to prepare the prompts at the outset, nor to train students to derive their own EI prompts.

However, EI is limited to “discrete factual statements”. It is not clear what one should ask “why” questions about for more intricate outcomes. It works great with fact lists, but elaborating on facts embedded in lengthier texts requires teachers to guide students on the kind of content to focus on for EI to be productively executed.



Overall Assessment: Medium Utility

The authors assessed EI as medium utility primarily because of its generalizability issues. Studies suggest that it is most effective with factual knowledge, and especially with students who have low domain knowledge. Also, its benefits for comprehension and over long delays are not clear from earlier studies and need more research.

The next post will discuss the self-explanation learning technique.




Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266

Smith, B. L., Holliday, W. G., & Austin, H. W. (2010). Students’ comprehension of science textbooks using a question-based reading strategy.

14 Jul 2017

A Science Museum That Makes Learning Overpoweringly Attractive for Kids: Schools, Take Note!

 

“We personalize learning all the time, we just don’t call it that,” says special education teacher Gina Tesoriero who has been teaching middle schoolers for over a decade. “When you give students open-ended challenges or design prompts, they actually personalize it themselves, bringing in their own interests and coming in at the level that is best for them.” Tesoriero has developed this belief over the past 10 years in the classroom—and she attributes much of it to her involvement with the New York Hall of Science (NYSCI).

“We want to know what you find compelling; what problem you think is worth solving; what you want to do or make. And then provide a space where that can happen.”

Douglas Moore

In 2010, Tesoriero and her colleague Amanda Solarsh, a middle school science teacher, stumbled across an opportunity to write curriculum at NYSCI. They were immediately taken with the museum’s learning model and wanted to incorporate elements of it into their classrooms at Simon Baruch Middle School 104. The following year, the duo participated in the Verizon Design Lab Fellowship, an opportunity for teachers to contribute to the creation of Design Lab, an interactive exhibit spanning two floors with activities that invite visitors to exercise problem-solving skills and develop solutions to engineering and design challenges.

 


Design Lab, Image Credit: NYSCI

The fellowship inspired Tesoriero and Solarsh to start an elective STEM course for seventh graders at their school. The course—developed to build 21st century skills like problem solving and innovative thinking—has scaled to two to three classes per grade level. Over the years, the teachers have participated in curriculum development, design labs and field trips, which have influenced the course and their practice.

The museum’s project-based, experiential, learner-centered approach isn’t revolutionary for K-12 education—in fact, many schools integrate elements of these approaches into their instructional model. But without the stresses of assessment and resource constraints, NYSCI is able to experiment and iterate. Douglas Moore, Vice President of Digital Education Strategy & Business Development at NYSCI says teachers visiting the museum with their students frequently make comments like, I’ve never seen those two work together so well or I’ve never seen her focus so much. “That’s because no one ever failed at a science museum,” he says.

According to Moore, getting someone to stop at your exhibit for even three minutes is a big win in the museum world. At NYSCI, visitors often stop to explore an exhibit for 30-45 minutes. Though this may not be optimal for museum flow, it raises the question: what can schools learn about engagement and personalization from this type of informal learning institution?

 

What can schools and teachers learn from NYSCI?

NYSCI, born at the 1964 World’s Fair in Corona, NY, is on a mission to put its visitors at the center of each hands-on learning experience. Originally exhibiting a collection of galleries sharing the potential of science, technology and space exploration, it is now home to over 450 interactive displays and a number of art and science exhibits rooted in experiential learning and the design, make, play approach.

 

NYSCI instructor Reid Bingham works with a class in the Maker Space.
Image Credit: David Handschuh/NYSCI

 

The NYSCI team is constantly asking itself: what is our role in education as an informal learning institution? “Our goal is to offer a very low barrier to learning—like a playful invitation,” says Moore. “We want to know what you find compelling; what problem you think is worth solving; what you want to do or make. And then provide a space where that can happen.”

Educators are part of NYSCI’s intended audience and there are a number of ways they can access the museum. Teachers can bring their classes to visit for open-ended field trips or scaffolded sessions designed around a particular challenge that needs to be solved, and can participate in professional development opportunities.

 

Field trips offer educators an opportunity to experience human-centered learning first-hand. Tesoriero reflects that some of the most engaged students during field trips were those who struggled the most in class. She notes that the greatest challenge with museum visits is finding a balance of holding students accountable for learning, while giving them space to explore what they are interested in, at their own pace.

 

For Tesoriero, a key part of that balance is NYSCI’s teenage “explainers,” a community of high school students participating in a youth development program with NYSCI called the Science Career Ladder. Explainers are not only experts on a particular exhibit or display, but are also skillful at supporting visitors to take control of their own learning and discover things on their own. These explainers are peppered throughout the museum and are often found with hands behind their backs asking open-ended probing questions to museum-goers. “They’re well trained and know a lot. I’ve learned a lot about how to help students discover things without telling them anything,” Tesoriero says.

 

Teenage Explainers, Image Credit: NYSCI

 

So what does it look like when a teacher adapts pieces of a museum’s learning model into the classroom? It can take shape in a number of ways. A museum might provide inspiration for resources and materials, inform lesson and unit design or influence philosophies of teaching and learning.

  1. Replicate an Activity: During a field trip, Solarsh’s students took part in a challenge to design and build a structure using wooden dowels that could provide shelter to 10 people after a natural disaster. Solarsh later purchased smaller dowels and replicated the activity in her classroom but with mini models, aligning it to her current civil engineering unit called “Scaling Structures.”
  2. Real-World Problems: Inspired by the challenge-based activities at NYSCI, Tesoriero developed a lesson back in her classroom that asked students to think about things that bothered them about eating and cooking, and to design a utensil that could solve one of those problems. Students built prototypes of thermometer-spoons and cups that change color as the temperature of a liquid rises and falls.
  3. Empower Students to Make Meaningful Change: During a “Shark Tank” unit, Solarsh asked students to consider real-world issues they wanted to solve and to design and present a solution for feedback. While she encouraged her students to follow their hearts and tackle large-scale problems like global warming, she also worked with students to make sure problems were focused, so that students could get a sense of how individuals can effect change. One student designed and pitched an idea for lung-cancer detection and later found out that it aligned with what professionals are researching in the field.

The museum loves when classes come to visit, but Moore cautions against teachers trying to make their classrooms just like a science museum. “It’s not realistic,” he says. “There are resource constraints.” That’s why NYSCI takes PD so seriously, and is working hard to develop resources that teachers and learners can use outside the museum.

Moore explains that NYSCI’s biggest luxury is the ability to ask the question, “How do you make a topic irresistible so kids can’t turn away first, and then figure out all of the other stuff later?”


Expanding reach beyond museum visitors

Getting outside of the classroom can offer the opportunity to explore non-traditional methods of teaching and learning—but not everyone can get to NYSCI. Moore’s team spends a lot of time considering how to support educators, students and families that can’t make the trip to the museum.

“We want to scale access to these learning experiences to reach the folks we assume will never come—the kid in Jakarta, the teacher in Texas,” Moore explains. A major priority is building tools that make it possible for people to participate in some of these learning experiences digitally. “Because we don’t have to be adopted by every teacher, we’re able to make aspirational products that show what is possible—and to work with teachers to make them implementable in a variety of settings.”

In 2015, NYSCI’s first foray into this field was developing Noticing Tools, a set of five apps based on Design Lab that help students tackle math through selfies, video and building 3D models. The apps were prototyped in Tesoriero and Solarsh’s classes. The museum is currently in conceptual stages of its second initiative: a mobile game based on the Connected Worlds exhibit, an immersive ecosystem simulation for learners of all ages located in the Great Hall at the museum. The exhibit puts each learner at the center of a massive environment where even the museum’s youngest visitors can explore complex topics like sustainability, systems thinking and how actions have both short and long-term consequences.

Straddling magic and science, it challenges learners to manage a limited water supply and balance the needs of all living beings in six interconnected digital biomes: wetlands, reservoir, jungle, grasslands, river valley and desert. Visitors can raise and lower their hands to plant seeds and move a set of physical logs to divert water from a 38-foot-high digital waterfall to an environment that needs it. Every decision made and every action taken impacts the environment.

The game will not try to replicate the exhibit. The goal is to design an open, online simulation game where players can build code and algorithms that have an impact on the ecosystem. With the official launch over a year away, there are many decisions still to be made, but a core element of the game will be to build on intrinsic motivation rather than the extrinsic rewards of gamification.

“‘I want to go deeper but the bell just rang.’ That’s what we want,” Moore says. Users won’t need to take a test to prove they are learning, because the evidence will lie in what they have built. This may not fit the traditional instructional model, but NYSCI isn’t building a game to fit into schools; it’s building a game to develop motivation through engagement.


The role of informal learning institutions in K-12 education

Society often turns to school leaders, educators, curriculum experts or the world of academia to propose innovative learning models when current practices fall short. But school leaders and educators face systemic pressures and budget constraints that can make it difficult to question the status quo and experiment with new ways to teach and learn. Perhaps informal learning experiences that take place outside of the classroom deserve more attention.

Without the stress of assessment, promotional criteria and the need to constantly provide evidence of progress, informal learning institutions like museums might just be able to make learning even the most complex ideas irresistible.

This blog post was first published on EdSurge.
03 Jul 2017

Forget Bloom’s: Here’s to SOLO teaching

In my conversations, planning, and design work with teachers and lead teachers over the past decade, one thing consistently stands out in their minds: Bloom’s Taxonomy of cognitive processes. It is the framework they can articulate. Many have heard of it in the staff room, been exposed to it in professional development workshops, read about it online or in a reference book, or perhaps even studied it during their college years. Many have also used Bloom’s cognitive nouns and verbs to guide their lesson planning, instructional practice, and even their assessments. Still, few know that Bloom’s Taxonomy was revised in 2001, and fewer still know about its knowledge dimensions (factual, conceptual, procedural, and metacognitive). Whatever their level of knowledge of the taxonomy, teachers recognize it immediately and can even categorize their own practice and assessments in its terms when asked.

This is exciting, as it holds real potential to improve student achievement; but in education one needs to know what works well in practice and what does not. If teachers want teaching clarity (making learning targets and success criteria clear to learners and to themselves), if teachers want learners to take more control over their learning, and if teachers need to differentiate systematically, then the taxonomy itself must be clear to both teachers and learners. The teacher, the learner, the tasks, and the assessment should all be clearly informed by it. This clarity is where Bloom’s taxonomy fails. The levels of cognitive processes in Bloom’s taxonomy, and their respective action verbs, do not help teachers set clear, measurable learning targets; do not help teachers design learning activities that can meet those targets; and do not help learners recognize and articulate the cognitive processes they are engaged in. Finally, Bloom’s taxonomy does not provide a whole-school framework and common language to systematize instructional routines and assessments, including learner self-assessment. I have rarely, if ever, seen teachers design, plan and deliver lessons with clarity informed by Bloom’s, nor learners who clearly know what cognitive effort a task entails or what success criteria it requires in Bloom’s terms. As Pam Hook says:

The taxonomy was published in 1956, has sold over a million copies, has been translated into several languages, and has been cited thousands of times.

The Bloom taxonomy has been extensively used in teacher education to suggest learning and teaching strategies, has formed the basis of many tests developed by teachers (at least while they were in teacher training), and has been used to evaluate many tests.

It is thus remarkable that the taxonomy has been subject to so little research or evaluation.

Most of the evaluations are philosophical treatises noting, among other criticisms, that there is no evidence for the invariance of these stages, or claiming that the taxonomy is not based on any known theory of learning or teaching.

 

The SOLO taxonomy (Structure of Observed Learning Outcomes), devised by Biggs and Collis (1982), classifies the learning outcomes students produce into levels of increasing complexity. The name itself reveals its function: the taxonomy is a structure, that is, it has a form, and this form permeates all knowledge levels. It focuses on clarity, since it seeks to make learning outcomes observable by teachers and learners, unlike Bloom’s cognitive taxonomy, which was devised for educational administrators.

The following is taken from Pam Hook’s wiki “The Learning Process – How Do You Know You are Learning?”

  • At the pre-structural level of understanding, the student response shows they have missed the point of the new learning.
  • At the uni-structural level, the learning outcome shows understanding of one aspect of the task, but this understanding is limited. For example, the student can label, name, define, identify or follow a simple procedure.
  • At the multi-structural level, several aspects of the task are understood, but their relationship to each other and to the whole is missed. For example, the student can list, define, describe, combine, match, or follow algorithms.
  • At the relational level, the ideas are linked and provide a coherent understanding of the whole. Student learning outcomes show evidence of comparison, causal thinking, classification, sequencing, analysis, part-whole thinking, analogy, application and the formulation of questions.
  • At the extended abstract level, understanding at the relational level is re-thought at a higher level of abstraction and transferred to another context. Student learning outcomes at the extended abstract level show prediction, generalisation, evaluation, theorising, hypothesising, creation, and/or reflection.

 

[Image: the five levels of the SOLO taxonomy]

 

Here’s a newer representation of SOLO using the house as a metaphor.

[Image: the SOLO taxonomy represented as houses]

 

SOLO includes declarative and functioning learning verbs

[Image: SOLO declarative and functioning verbs]

Source: Hook (2011)

 

SOLO verbs make it easy to align learning targets with an achievement standard

[Image: SOLO verbs aligned to an achievement standard]

 

SOLO can also be codified for student self-assessment, linking the student’s cognitive level to the task requirements.

[Image: SOLO self-assessment rubric]

Source: Hook (2011)

The above are a few samples of the many ways SOLO can be easily adopted by teachers and students. It creates a common whole-school language and framework for instruction, learning, and assessment.

 

Pam Hook writes a succinct critique of Bloom’s Taxonomy and details the advantages of the SOLO model over Bloom’s:

Advantages of the SOLO model for evaluation of student learning
    • There are several advantages of the SOLO model over the Bloom taxonomy in the evaluation of student learning.
    • These advantages concern not only item construction and scoring, but incorporate features of the process of evaluation that pay attention to how students learn, and how teachers devise instructional procedures to help students use progressively more complex cognitive processes.
    • Unlike the Bloom taxonomy, which tends to be used more by teachers than by students, the SOLO can be taught to students such that they can learn to write progressively more difficult answers or prompts.
    • There is a closer parallel to how teachers teach and how students learn.
    • Both teachers and students often progress from more surface to deeper constructs and this is mirrored in the four levels of the SOLO taxonomy.
    • There is no necessary progression in the manner of teaching or learning in the Bloom taxonomy.
    • The levels can be interpreted relative to the proficiency of the students. Six year old students can be taught to derive general principles and suggest hypotheses, though obviously to a different level of abstraction and detail than their older peers. Using the SOLO method, it is relatively easy to construct items to assess such abstractions.
    • The SOLO taxonomy not only suggests an item writing methodology, but the same taxonomy can be used to score the items. The marker assesses each response to establish either the number of ideas (one = unistructural; two = multistructural), or the degree of interrelatedness (directly related or abstracted to more general principles). This can lead to more dependability of scoring.
    • Unlike the experience of some with the Bloom taxonomy it is relatively easy to identify and categorise the SOLO levels.
    • Similarly, teachers could be encouraged to use the ‘plus one’ principle when choosing appropriate learning material for students. That is, the teacher can aim to move the student one level higher in the taxonomy by appropriate choice of learning material and instructional sequencing.

Want more? Here is a link on  Problems with Bloom’s Taxonomy (Invalid, unreliable, impractical)

 

Want to dive into SOLO model? Check out Pam Hook’s Website. Start with these two introductory books:

SOLO Taxonomy: A Guide for Schools Bk 1: A Common Language by Pam Hook and Julie Mills

SOLO Taxonomy: A Guide for Schools Bk 2 by Pam Hook and Julie Mills

 

 


© 2017 Eductechalogy. All rights reserved.
