Phase 2 Implementation and Findings
In implementing phase 2, I used Lee's work on concrete, pictorial, and abstract learning to shape my lessons. In week 1 of phase 1, I had isolated the vocabulary and front-loaded it for the students so that when they saw it in the context of the sections it would not be unfamiliar. The issue was that because I was presenting the vocabulary out of context, the students could not relate to it. They could not internalize the vocabulary or buy into it because they had no reason to; the question of why went unanswered. In week 2, I used a more conventional style, emphasizing vocabulary as it came up within the lessons. In this case, the question of why was answered. By using the vocabulary in context, the students bought into it, and the context made the vocabulary easier to recall because they had a pathway to its meaning.

Another intriguing trend, which I noticed only once I triangulated my data, was that my students were far more confident when they had a picture to use, be it for solving a problem, defining the vocabulary, or making sense of the concept. I took this as a guide in building my lessons for phase 2: I left the vocabulary in context and emphasized it with pictures and diagrams, giving the students a visual clue that formed another pathway to meaning. These lessons played to my students' strengths, since many of them are visual learners; in fact, 75% of respondents to a student feedback form said that having a picture helped them solve a problem.

I also designed my activities and tests to push the students toward more abstract thinking. Rather than asking the students simply to regurgitate a concept or formula, I would give them several parts and the result of a formula and ask them to solve for the missing piece. For instance: if the area of a circle is 27 square feet, what is the radius? The formula for the area of a circle is A = πr², so the students were tasked with using the area to solve for r.
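As a sketch of the algebra behind that example (the numerical value here is my own arithmetic, not taken from student work):

```latex
A = \pi r^{2}
\quad\Longrightarrow\quad
r = \sqrt{\frac{A}{\pi}}
  = \sqrt{\frac{27\ \text{ft}^{2}}{\pi}}
  \approx 2.93\ \text{ft}
```

Working backward from the area to the radius is exactly the kind of inversion that moves students from rote recall of the formula toward abstract reasoning with it.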
I started in phase 1 with very straightforward test problems, asking the students only to regurgitate formulas and concepts, and moved through to more complex types of questions; the difficulty increased steadily from the first test I gave to the last. My tests were also designed to check how visual my students are in their learning. One big design element of my assessments was that half of the problems on each test were accompanied by pictures and half were not, which gave me more robust data points by comparing students to themselves. The problems without pictures were comparable in scope and difficulty to the problems with pictures, to give a truer measure of how students use pictures and how their presence or absence affects performance. The truest measure would be to present the same problem both with and without a picture, but this is somewhat unrealistic: it would be too easy for students to copy their work from the first version, which would introduce lurking variables. I found that on problems without a given picture, students drew their own pictures more often than not. They also made effective use of pictures when present, with a per-problem average of 68.75% of students using the given picture to solve the problem. By per-problem average, I mean that for each problem where a picture was given, I checked the percentage of students who used the picture, then took the average of those percentages.
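Stated as a formula (the notation here is mine, introduced for clarity), the per-problem average described above is:

```latex
\bar{p} \;=\; \frac{1}{N}\sum_{k=1}^{N} \frac{s_{k}}{n_{k}} \times 100\%
```

where N is the number of problems with a given picture, s_k is the number of students who used the picture on problem k, and n_k is the number of students who attempted problem k. Computed over my tests, this average came out to 68.75%.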
From this work, three main findings emerged.

1) The students had more and longer recall of the vocabulary when they had pictures to associate with it. On the tests, students consistently performed better on the questions with pictures, and in class they demonstrated more confidence in discussing the vocabulary and problems when they had pictures to refer to. The pictures seem to have acted as memory jogs, giving the students a more familiar pathway to the vocabulary and concepts. Rather than trying to picture something in their minds, they had a tangible tool to organize their thinking, freeing their thought processes to focus on calculating and solving the problem. Being visual learners, these students' sense-making abilities are tied to pictorial analysis; they use pictures as a key tool in processing problems and concepts. This goes back to my primary finding from phase 1, that my students are visual learners.
2) Real-world problems, including those involving the students themselves, caught the students' interest and made it easier for them to access the content and knowledge. This can be seen in the test results: the first test of phase 1, which was the easiest and contained no real-world problems, had 77% of students scoring a B or better, whereas the last test, which was the most difficult they had seen and was composed entirely of real-world problems, had 87.5% of students scoring a B or better. I used problems such as separating a tortilla into portions for burrito ingredients, a dartboard, a stop sign, and a drum kit to give the students real-world connections to work with.
3) My students felt much more comfortable using the vocabulary with each other as phase 2 progressed, as shown by a 32% increase from week 1 to week 3 in general mathematical discussions among students and, in particular, a 30% increase from week 1 to week 3 in conversations involving academic vocabulary. The week 1 to week 2 and week 2 to week 3 changes matched for academic vocabulary conversations, but the increase in general mathematical conversations was most notable between weeks 1 and 2.
For a hypothetical phase 3, please check the Future Steps page.