Google Drive Lab Report Workflow

This year I've rolled out Google Drive for all Physics lab reports. Several people have asked me what this looks like, so I thought I'd share. Feel free to suggest a better or easier methodology- this workflow has come together based on how I happen to use Google Drive, and I certainly don't know all the ways it can be used.

A big debt is owed to Katrina Kennett, whose posts and EdCamp Boston sessions on using Google Drive for paperless grading inspired my use, and to Frank Noschese, whose lab rubric I borrowed from heavily.

The setup

1- Creating shared folders. As soon as I get a finalized class list and my students' email addresses, I set up shared assignment folders in Google Drive for each student in my Physics class. Each folder is shared only between that individual student and me, so anything I put into the folder they can see, and vice versa.

Here's what it looks like for me in Google Drive:

Shared Assignment Folders in Google Drive

Creating individual folders for each student can be a tedious process. Fortunately, you don't have to do it by hand- there's a Google Script called gClassFolders that will automatically create folders for all of your students from a spreadsheet with your students' information. I won't go into detail here about how to set up gClassFolders, as the official site does an excellent job walking you through the process.
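
For the curious, here's a rough sketch of the kind of automation gClassFolders handles for you. This is not gClassFolders' actual code- just a minimal Google Apps Script sketch, assuming a sheet named "Roster" with student names in column A and email addresses in column B, and a placeholder ID for the class's parent folder:

    function createSharedAssignmentFolders() {
      var rows = SpreadsheetApp.getActiveSpreadsheet()
          .getSheetByName('Roster')
          .getDataRange()
          .getValues();
      var classFolder = DriveApp.getFolderById('YOUR_CLASS_FOLDER_ID'); // placeholder ID

      for (var i = 1; i < rows.length; i++) {            // skip the header row
        var name = rows[i][0];
        var email = rows[i][1];
        var folder = classFolder.createFolder(name + ' - Physics Assignments');
        folder.addEditor(email);                         // shared with just that student
      }
    }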

2- Share the rubric. I created the lab report rubric in a Google Spreadsheet; then I make a copy of the rubric for each student, share it with them, and place it in their individual folder. Again, this could be a tedious process. Fortunately it isn't, thanks to Doctopus. Doctopus makes a copy of the rubric for each student, shares it with that student, and puts it into their GDrive assignment folder. Super easy.

To use Doctopus, you'll just need a spreadsheet with students' names and email addresses (which you probably already have from using gClassFolders in step 1), and then it'll walk you through your sharing and naming options. Again, I'll forgo the lengthy explanation of using Doctopus, because the official site has you covered.
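
Again just for flavor, here's a rough sketch of the copy-share-file operation Doctopus performs. This is not Doctopus's actual code- just a minimal Apps Script sketch, assuming the same "Roster" sheet now also holds each student's assignment-folder ID in column C, and a placeholder ID for the master rubric:

    function shareRubricCopies() {
      var rows = SpreadsheetApp.getActiveSpreadsheet()
          .getSheetByName('Roster')
          .getDataRange()
          .getValues();
      var template = DriveApp.getFileById('YOUR_RUBRIC_TEMPLATE_ID'); // placeholder ID

      for (var i = 1; i < rows.length; i++) {            // skip the header row
        var name = rows[i][0];
        var email = rows[i][1];
        var folder = DriveApp.getFolderById(rows[i][2]); // the student's assignment folder
        var copy = template.makeCopy(name + ' - Lab Report Rubrics', folder);
        copy.addEditor(email);                           // student can open and edit their copy
      }
    }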

At this point, when each student signs into GDrive, they'll see their shared folder, with a spreadsheet titled, "Josh- Lab Report Rubrics," for example.

A student's view of the assignment folder

Now we're ready for some student lab reports.

Google Drive in Action

3- Students write lab reports. In lab, students record their data in lab notebooks, graph their data using LinReg, and discuss their results in a post-lab Whiteboard Meeting. For their formal lab report, they create a Google Doc and type it up. For graphs, they take screenshots and add them to the report as images.

When they have finished the lab report, they drop it into their Physics Assignment folder, where I can see it and have permission to edit it.

4- Scoring. Since I am able to edit their lab reports, I leave comments directly on each report, as shown below.

Comments on a Google Doc lab report.

A nice feature of Google Docs is that students receive notifications when I leave a comment, so they know right away when I've commented on their lab report.

While I'm commenting on a student's lab report, I'm also filling out the Lab Report Rubric & Checklist for it. An important note: for each student, I fill out the checklist on my copy of the lab report rubric, not on the copy that I've already shared with that student. This may seem odd, but in the end it means each student will have one spreadsheet that contains the rubrics for every lab they've done. Below I'll explain how to make that happen.

5- Copying the rubric to students. After I've finished filling out the lab report rubric and checklist for a student's lab report, I select the "Copy to..." option on the tab of the spreadsheet:

The "Copy to..." location

A window then pops up asking me what Google Spreadsheet I'd like to copy it to. Since I've already created a lab report rubric spreadsheet for every student (in step 2), I just search for the student's first name, and select their lab report rubric spreadsheet:

Searching for student lab report spreadsheets

Once selected, the sheet is copied to that student's spreadsheet, where they can see it. On a student's spreadsheet, it'll show up as "Copy of [tab name]," as shown below:

Copied tab- Student view

Voila! Each student has one document that will contain every lab report rubric we do all year. This makes it easier for students to look back at previous lab reports and see where they made mistakes or needed more depth. It will hopefully also document their growth over time.

Once I've copied a lab report rubric to the student's spreadsheet, I revert my copy of the rubric back to its original state so it's ready for me to start on the next lab report.
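
Incidentally, if you ever want to script that "Copy to..." step, Apps Script can do the same thing. A minimal sketch, where the student's spreadsheet ID and the tab name are placeholders you'd supply:

    function copyRubricTabToStudent(studentSpreadsheetId, tabName) {
      var myTab = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(tabName);
      var studentBook = SpreadsheetApp.openById(studentSpreadsheetId);
      var copied = myTab.copyTo(studentBook);   // shows up as "Copy of [tab name]"
      copied.setName(tabName);                  // optional: drop the "Copy of" prefix
    }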

6- Rewrites. When a student turns in a less-than-stellar lab report, they're required to do a rewrite. A nice (and new) feature of Google Drive is the Activity Pane, which shows all the changes being made to documents in a specific Google Drive folder. As students work on their rewrites, I can check the Activity Pane for the folder containing the students' shared folders and quickly see who has been updating their documents (and who hasn't).

Activity view in GDrive

Wrap Up

This is the first year I've used such a system, and it's definitely a work in progress. So far I've been quite happy with how the process has worked, and being able to create one document that contains the rubric for every lab report we do all year is a major plus.

Again, if you have any comments, questions, or suggestions for improvement, let me know. I'd definitely be open to suggestions that make the process even more streamlined.

Dear Skeptics' Guide: Standards aren't the solution

There's a widespread narrative regarding science education in the United States: It stinks. As a science educator, whenever I hear this two things happen. First, I get my hackles all up. Second, I realize that despite my hackles I generally agree. I get my hackles up because I've spent a lot of time thinking about, planning, designing, and implementing a science curriculum that I feel has been pretty darn good. However, I recognize that the School System (I'm not picking at any one school district here, but instead at the entire system of schooling in this country) has not done a very good job of helping students to think and act like scientists.

Recently, while listening to the Skeptics' Guide to the Universe podcast (#343), I had my hackles raised. They discussed a recent article on io9 titled, "Your State Sucks at Science.1" This article covered a report by the Thomas B. Fordham Institute that analyzed each state's standards on their "Content & Rigor" and "Clarity and Specificity." The results (summarized on the map below) showed that the vast majority of states didn't do so well. In fact, they did terribly.

Grades for States on Science Standards.

OK, that information doesn't shock, surprise, or upset me. Connecticut earned a not-so-respectable "C." I'd probably give the standards I've worked with (9th grade Integrated Science) a lower grade. Many standards are overly broad. Others are ambiguous. I agree with the Skeptics' Guide, io9, and the Thomas B. Fordham institute that improving these standards would be a good thing for science education.

So, why are my hackles still raised? Well...during the Skeptics' Guide to the Universe (SGU) discussion on the sorry state of science education, the general view was that poor standards are the crux of the problem (followed by poor teachers- more on this later). It was stated that poor standards will cause teachers to fail their students more often than they would if states had good standards. As anecdotal evidence of this, Dr. Steven Novella noted his daughter is receiving a sub-par science education at the Connecticut public school she attends. Dr. Novella specifically described his two big problems with his daughter's science instruction:

  1. Inquiry and scientific thinking is not taught well at all.
  2. Real science education doesn't even really begin until the 7th grade. In grade schools they get virtually nothing.

I generally agree with these assertions. What really bothered me, however, was the discussion of why these problems exist. Here are some quotes from the discussion:

  • "Teachers don't quite grasp how science works."
  • "When the standards fail the teachers, the teachers will more likely fail the students."

Can you see where this is going? They never come right out and say science education stinks because our science teachers stink, but that idea is hovering just beneath the surface. I readily admit there are science educators who don't quite grasp how science works and who don't do a great job of designing science instruction. However, I believe this is more of a systemic issue than an individual teacher issue. Let's look at Dr. Novella's two assertions again:

  1. Inquiry and scientific thinking is not taught well at all.
    • Education these days is all about high-stakes testing. What's valued by our current schooling system are good scores on standardized tests, so effective teachers are labeled as those who help students earn good scores on standardized tests. However, it can be tricky to assess inquiry and scientific thinking. The best way to assess these skills is to observe students performing scientific inquiry (or at least look at a portfolio of student work) to gauge the level of sophistication in scientific inquiry and thinking the student possesses. So, let's look at how Connecticut assesses science: The Connecticut Mastery Test (given grades 3-8) and the Connecticut Academic Performance Test (given to 10th graders) both assess "experimentation and the ability to use scientific reasoning to solve problems2." The CAPT science test includes 60 multiple choice and 5 open-response questions. In 5th grade, the CMT science test includes 36 multiple choice and 3 open-response questions, and in the 8th grade edition there are 55 multiple choice and 3 open-response3. Multiple choice questions- even well designed items- are a shoddy way to measure inquiry. Even the open-response questions that require several sentences to answer aren't a very good measure. Yet this is the system of assessment we value, and this system of assessment doesn't value inquiry, so why are we surprised when inquiry and scientific thinking take a backseat in the classroom? The problem doesn't start with the teachers; it starts with our method of assessment.
  2. Real science education doesn't even really begin until the 7th grade. In grade schools they get virtually nothing.
    • Again, let's look at what our schools value by looking at what they assess: The CMT is given to every 3rd through 8th grader attending Connecticut public schools. Every year from the third grade and on, students are assessed in mathematics and language arts. Only 8th graders took a science CMT through 2007. Starting in 2008 the state added a science CMT to the 5th grade as well. Why is science instruction getting the short end of the stick? Because we're not assessing it. The focus on math and language arts isn't a bad thing, but it means that subjects not being assessed are being pushed to the side. This isn't the fault of the teachers stuck in this system- it's the fault of the system itself.

What's the solution?

I am not advocating for more standardized science tests. I have no problem with improving our science standards. However, unless we change the current methods of assessment I wouldn't expect to see much change. To learn scientific thinking and inquiry, students must be given time in class to explore ideas, rethink assumptions, and test their hypotheses. These things take a lot of class time- furthermore, they deserve a lot of class time. Having lots of well-written standards is generally a good thing, but it also means teachers are pressured to "cover" all the standards to the detriment of depth of understanding and student exploration.

Dear SGU, you are science educators yourselves, and I love most of what you do. However, I'd like you to think and talk more deeply about what good science education in schools looks like and whether that vision is being supported by the assessment methods employed by the states. A wise person once said, "What we assess defines what we value4." I'd add "How we assess defines what we value," as well. If we value inquiry and scientific thinking, our assessments should be more sophisticated- requiring students to actively demonstrate their understanding of how science works. These assessments would be expensive to design and implement, but they would more accurately reflect students' actual scientific knowledge and skills. It's not that I think the SGU hates teachers, but you do seem to be jumping on the political narrative that places undue blame for poor education practices on the shoulders of teachers while ignoring the systemic forces that shape how and why teachers deliver instruction in the classroom.

  1. The discussion starts about 27 minutes into the episode and runs for 10 minutes on this topic.
  2. See 2011 CAPT Interpretive Guide, p. 5. http://www.csde.state.ct.us/public/cedar/assessment/capt/resources/misc_capt/2011%20CAPT%20Interpretive%20Guide.pdf
  3. Question information from the CAPT Program Overview 2012, p. 11, http://www.csde.state.ct.us/public/cedar/assessment/capt/resources/misc_capt/CAPT%20program%20overview%202012.pdf and Science CMT Handbook, p. 8, http://www.sde.ct.gov/sde/lib/sde/pdf/curriculum/science/science_cmt_handbook.pdf
  4. This was a Grant Wiggins quote, I believe.

3+ Quick- Birthday, (grading) scale matters, exposing climate fraud, debunking handbook

These aren't brand new items- they're things I came across a while ago and am just getting around to posting now. In addition, I realized that the anniversary of this blog just passed. My first post was published January 12, 2008. As I look back at my first posts, it's clear that I've come a long way (hopefully for the better)- in my location, in my career, and in my thinking. So, in celebration of the 4th anniversary of this blog, let me present you with the following interesting tidbits:

Scale matters (Rick Wormelli)


Thanks to the ActiveGrade blog for bringing this to my attention. I don't know how many times I've had discussions with other teachers about what constitutes fair and effective grading. Often the most heated topic (where I never made any headway) involved giving zeroes for missing or poorly done classwork. Rick Wormelli gives a great explanation of why grading scales matter- and specifically why zeroes are no good. It's long for YouTube at 8+ minutes, but it's worth it:

Exposing a climate science fraud (Ethan Siegel)


The post is ostensibly a takedown of Judith Curry's claims that recent studies and reports on the topic of climate change are "hiding the decline1." However, the real appeal of this post (for me) is how effectively it describes how science and scientists work. He goes through the data and the uncertainties in measurement, and explains exactly how scientists determine that some effect is real and not just a statistical fluke.

The Debunking Handbook (Skeptical Science)


Somewhat related, the Skeptical Science blog (one of the best places to find science-based information about climate science) released The Debunking Handbook a while ago and just recently updated it. The Handbook provides guidelines for communicating about misinformation and gives tips to avoid falling into common pitfalls. In their own words, "The Handbook explores the surprising fact that debunking myths can sometimes reinforce the myth in peoples' minds. Communicators need to be aware of the various backfire effects and how to avoid them..." The handbook is a free PDF download available at their website.

  1. "Hiding the decline" is the (totally false) idea that climate scientists are tweaking their graphs to make it seem like the Earth is getting warmer, when it really has been cooling the last decade (which it hasn't). Read the full article for more details. []

Pipe Insulation Roller Coaster Assessment

Welcome back. If you haven't joined us for the last two posts, let me recommend that you first read about determining rolling friction on the coaster and the project overview.

On to the assessment...

Assessment is extremely important. It explicitly shows students what we value (and thus what they should value). If we assess the wrong things, students will focus on the wrong things. This can turn an otherwise excellent project into a mediocre one. For this post, I'll share two methods of assessment: First, the "old" method I used when I last taught physics (in 2008). Second, my updated assessment scheme that I'd use if I did this project again.

The old assessment strategy

Embedded below is the document I gave to students at the beginning of the pipe insulation roller coaster project. Most noticeably it includes a description of the assessment scheme I used way back in January of 2008.
[scribd id=73149530 key=key-2h4y3du7bm3b9wfvgt2g mode=list]


As you can see, I split the assessment of this project into two equal parts:

An assessment of the finished roller coaster

I wanted students to think carefully about the design, construction, and "marketing" of their coasters. I wanted them to design coasters that not only met the requirements, but were also beautiful and interesting. Individual items being assessed under this rubric were weighted differently. For example, "Appropriate name of the coaster" was only worth 5%, while "Creativity, originality, and aesthetics" was worth 20%. Here's a link to the sheet I used when assessing this aspect of the coaster project.

An assessment of the physics concepts

In the embedded document above, you can see the breakdown of what items were being assessed. In my last post on pipe insulation roller coasters, you can see how students labeled their coasters with information on the marble's energy, velocity, and such along the track. Groups were required to turn in a sheet with the calculations they performed to arrive at these numbers. These sheets were the primary basis for determining whether students understood the physics concepts.
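
To give a flavor of those calculations, here's a minimal example of finding the marble's speed at a labeled point using conservation of energy. It ignores friction (the earlier rolling-friction post adds that loss term), and the heights are made up for illustration:

    // m*g*h0 = m*g*h + (1/2)*m*v^2, so v = sqrt(2*g*(h0 - h))
    function marbleSpeed(startHeight, pointHeight) {   // heights in meters
      var g = 9.8;                                     // m/s^2
      return Math.sqrt(2 * g * (startHeight - pointHeight));
    }

    marbleSpeed(1.2, 0.4);                             // about 4.0 m/s at the 0.4 m point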

Problems

There are a lot of problems with the assessment scheme as described above. I'm not going to try to address them all, so here are a couple of the biggest issues:

  • Assessing coaster design
    • I'm a fan of elegant design. For this project that means finished coasters that look well designed and exciting. That's why I included the first part of the assessment. I wanted to incentivize students to think about the design and construction of their coasters. In retrospect this is probably unnecessary. Students generally came into this project with plenty of intrinsic motivation to make their coaster the best in the history of the class. While I'd still stress the importance of quality design in the future, I'd completely cut this half of the assessment. Students already cared about the design of their coaster. If anything, awarding points for coaster design had a net negative effect, especially because it doesn't assess anything related to understanding the physics.
  • Assessing student understanding of physics concepts
    • As a normal part of working in a group while attempting to complete a large project in a limited time, students split up the work. Students are generally pretty smart about this in their own way. While I stressed that everyone in the group should contribute equally towards the calculations, most groups would have the student with the best understanding of the physics do most of the calculations. Why? Because it was faster. They needed to finish their coaster, and having the fastest person do the calculations meant more time for construction. While I generally knew when students in a group were adding very little to the calculations (and would assess them accordingly), on the whole this method didn't give me a good picture of each individual student's level of understanding. There were certainly students who skated through the project while minimally demonstrating their understanding of the energy and friction concepts involved.

The new assessment strategy

You've probably already picked up on a few of the improvements I'd make for this project.

  1. Use standards-based assessment. Standards-based assessment is an integral part of the classroom throughout the year- not just for projects. If you're unfamiliar with what this "standards-based" business is all about, click the little number at the end of this sentence for plenty of links in the footnotes1. Here is a list of standards that would be assessed through this project:

    Content standards assessed

    • Energy
      • Understand and apply the law of conservation of energy.
      • Explain and calculate the kinetic energy and potential energy of an object.
      • Explain and calculate the amount of work done on and by an object.
      • Solve basic conservation of energy problems involving kinetic energy and potential energy.
      • Solve conservation of energy problems involving work and thermal energy.
    • Circular Motion
      • Solve basic circular motion problems using formulas.
    • Habits of Mind
      • Collaborate and communicate with others to meet specific goals.
      • Handle and overcome hurdles creatively and productively.

    The standards used will vary based on your specific implementation.

  2. No points for coaster requirements. As I mentioned earlier, it proved unnecessary to award points for coaster design or for meeting the basic requirements of the project. This decision also comes out of standards-based grading, which focuses assessment around "Do you know physics?" instead of "Can you jump through the right hoops?" That isn't to say we don't talk about what makes a coaster "exciting" or "aesthetically pleasing" or whatever. It just means a student needs to demonstrate their understanding of the physics to earn their grade.
  3. A focus on informal assessment. Rather than relying heavily on a sheet of calculations turned in at the end of the project (and probably done lopsidedly by one or two group members) to determine if the group understands the physics, I'd assess their understanding as I walked around the classroom discussing the coasters and their designs with the students while they worked on them. Questions like "Why did you make that loop smaller?" or "Where are you having trouble staying within the requirements?" can be used to probe student thinking and understanding. The final calculations would still be a part of the assessment, but no longer the single key piece of information in the assessment.

On the whole I was very happy with this project as I used it in the past. As I've learned and grown as a teacher I've found several ways I can tweak the old project to keep up with the type of student learning I want to support in my classroom. If you have other suggestions for improvement, I'd be happy to hear them.

As a bonus, here's a student-produced video of the roller coaster project made for the daily announcements. The video was made by a student who wasn't in the physics class, so there's a little more emphasis on the destruction of the roller coasters at the end of the project than I'd like. Kids. What can ya do?
[vimeo http://vimeo.com/32422278 w=500]

  1. Here are posts I've written about my experience implementing standards-based assessment. I'm not an expert, so let me also direct you to my bookmarks related to standards-based grading, and some resources written by a couple of people who are: Shawn Cornally and Frank Noschese (who offers blog posts, a shared Google Doc folder, and a collection of bookmarked links). There are certainly other great resources out there, but these are a great starting point.

Exams: SBG-style

The goal of any exam, ideally, is to assess how much students have learned over the course of a semester or school year. When I changed the focus of grading in my classes from counting points to tracking progress towards specific learning goals, I knew my exams needed to reflect that change as well.

This summer I had initially thought I might design some sort of alternate, performance-based exam that would mesh well with the tenets of standards-based grading. However, this year all exams for the same class were required to be exactly the same regardless of teacher. Since I'm currently one of four teachers who teach the 9th grade Integrated Science course and the only one using standards-based grading, I knew I had to take our common exam and make the best of it.

So, the exams had to have the same questions, but they didn't need to be in the exact same order, right? I reordered all the questions on the exam based on the learning goal they assessed.

Multiple choice section, SBG exam

This process uncovered several questions which didn't address any of the learning goals, so these "others" were grouped together to make their own section.

Overall, I wasn't thrilled with the exam, but I think it was quite good given the requirements it had to meet.

Assessment

Breaking down the exam into its component learning goals allowed me to assess each learning goal on the exam individually. It took considerably longer to grade the exams this way, but it also provided me and my students with a wealth of information about their learning throughout the first semester.

I created a Google Spreadsheet that automatically calculated the individual scores for each learning goal and the overall exam grade. Once the grading was done, I shared each student's spreadsheet with them through Google Docs.

Below is an example of a filled out scoresheet (and here's a blank calculation sheet if you're interested):

Example Exam Calculation Spreadsheet

Details

Overall grades. You may notice I calculated two "overall" grades. I told students their overall grade on the exam would be the average of their scores on each learning goal (giving each learning goal equal weight), but I wasn't sure if that might result in some odd effects on the overall grade due to some flaw I hadn't planned for. As a check, I also calculated the exam's score "traditionally," simply by dividing the total points earned by the total points possible. Interestingly, these two scores were almost always ridiculously close to each other (for most students the difference was <1%). I'm not sure exactly what that means, but it was interesting nonetheless.

Unfinished long answer questions. The exam had 6 long answer questions, and students were required to complete at least 4 of them. I had a few students who either skipped the long answer questions entirely or did fewer than were required. It didn't make sense to penalize any one learning goal for not doing all the long answer questions (since, after all, simply not doing them didn't necessarily mean the student didn't understand the content of the learning goals). However, I felt there should be some penalty for doing fewer than required1. As a result, I calculated what percentage one long answer question was worth of the entire exam and divided that by 2- which gave me 1.84% in this case. For each required long answer question that was not completed, I took 1.84% off their overall exam grade.
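
For the curious, here's a rough sketch of the arithmetic the calculation spreadsheet handles. The structure and numbers below are placeholders for illustration, not the actual sheet's layout:

    function examGrade(goalScores, totalPossible, pointsPerLongAnswer, skippedLongAnswers) {
      // goalScores: one {earned, possible} entry per learning goal on the exam
      var byGoal = 0;
      var totalEarned = 0;
      goalScores.forEach(function (g) {
        byGoal += 100 * (g.earned / g.possible) / goalScores.length;  // equal weight per goal
        totalEarned += g.earned;
      });
      var traditional = 100 * totalEarned / totalPossible;            // earned over possible

      // Penalty: half of one long-answer question's share of the exam, per skipped question
      var penalty = skippedLongAnswers * (100 * pointsPerLongAnswer / totalPossible) / 2;

      return { byGoal: byGoal - penalty, traditional: traditional - penalty };
    }

    // e.g. two learning goals, one required long-answer question skipped:
    examGrade([{earned: 8, possible: 10}, {earned: 14, possible: 20}], 30, 5, 1);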

Spreadsheet-fu. I honed some serious "if-then" formula skills in the process- an area of spreadsheet-fu weakness before this project. Despite the time it took me to figure out how to make the spreadsheet do what I wanted, I'm still pretty sure using the spreadsheet instead of calculating everything by hand saved me several hours. Plus, now I have another formula type under my belt.

Final thoughts

Perhaps unsurprisingly, my predictions about what learning goals would be problematic for students on the exam were dead-on. They were the same learning goals that more students struggled with during the course of the semester. There really weren't any surprises on the mid-term.

What then, is the purpose of an exam in an SBG classroom? Exams are meant to assess how well students know the material that has been presented throughout the semester. However, if I am regularly assessing students' understanding of learning goals throughout the semester, is there any benefit to a final, summative exam? Most students' exam grades were eerily close to their grades for the rest of the semester2.

If we're doing SBG well, it seems to me the final exam is unnecessary. We should already have a good understanding of exactly what students know, so why bother with a big test at the end of the semester?

Should the exam in an SBG classroom be something totally different from what we've traditionally come to think of as an exam? Or should exams just be done away with?

_____

  1. At first I really balked at penalizing students for not completing the required long answer questions. However, after thinking about it for a bit, I came to the conclusion that a student's decision to skip one or more of the long answer questions was, at least to some degree, indicative of a lack of understanding of the content.
  2. On average, the exam grades were just a bit lower than grades for the rest of the semester. I can rationalize that in several ways: additional anxiety due to it being an exam, a less-than-perfect exam design, etc.

SBG: One Quarter Down

This Friday marks the end of the 1st Quarter of the school year. At this point I'm totally an SBG n00b. For the standard, "I can successfully implement standards-based grading in the 9th grade Integrated Science classroom," I'd rate myself at the "basic" level. I've got the basic idea, I've got the basic setup, it's going basically well, but it's a long way from where I hope it will be by the end of the year.

Reflections

Students don't get it

Students understand that their overall performance is based on their scores for the learning goals we've gone over in class. They understand that only their most recent score for each learning goal counts. Unfortunately they have at least 8 solid years of conditioning as point-grubbers. The whole concept seems totally foreign to their entire school experience. It saddens me that explaining to a student that their grade is based on their actual understanding of the content draws a blank "I don't get it" look. I keep telling myself that by frequently explaining the basic tenets of SBG and sticking to my guns, students will eventually reach the point where understanding smacks them upside the head and they spend the rest of the year walking around school demanding that all their teachers do it this way. However, I'd be willing to bet a big part of the problem is the fact that...

I don't get it

Well, I get it, but I'm not sure I get how to implement it. I'm not sure I get how to communicate it. I'm not sure that what I'm doing day to day supports the "radical" mandate of SBG1. There have been several changes to my school life this year that have left me time-strapped and feeling I just don't have time to go through my curriculum with a fine-tooth comb and tweak it to fit the SBG mandate. Part of the issue is my understanding of...

Qualitative SBG

Many of the SBG Titans out there teach quantitative subjects such as Math or Physics. I'm teaching a much more qualitative 9th grade Integrated Science. Conceptually, I understand how SBG works within a qualitative course. On the implementation side I'm not as comfortable. Great inquiry-based activities focused on the life cycle of stars are a little trickier for me to design than those around the work-energy theorem. I'm not trying to cop out of providing a curiosity-rich learning environment here; some topics are just harder for me to design great stuff around. Which leads to the complication of the...

State curriculum

The Connecticut State Curriculum Standards for 9th grade Integrated Science aren't that bad. Sure, they're often poorly worded and overly expansive,2 but there are a lot of interesting and relevant topics in there. I'm not one to worry about skipping a standard or six, but there are people (generally the people that fill out my evaluations) who think it's best that I not miss any.

Yesterday I had a crazy daydream about a place where there weren't oh-so-specific standards for each class and I could really let students' questions and curiosity drive what we cover and when. I get why we have state standards and think they're generally a positive thing, but I dislike their specificity. We keep forgetting to leave room for curiosity and the pursuit of interesting questions. I need to find a balance between keeping up with the other Integrated Science teachers and making sure I'm putting student learning at the forefront, which is much more difficult because I'm...

Going it alone

I'm the only teacher at my school using SBG. I've pitched it to my Integrated Science colleagues and explained its wonders to my principal, but they didn't seem too interested3. I'd like to work with them to puzzle through how we'll deal with the state standards while doing SBG, or to share the effort of designing great activities and projects that keep curiosity and discovery at their center. Even trickier: we were given a mandate that our mid-term and final exams must be exactly the same. That wouldn't be a big deal if we were all on the SBG Express. Since I'm riding solo, the common exams probably won't live up to my expectations of what an SBG exam should look like. They certainly won't be focused solely around the learning goals I've developed, which is a major bummer.

Some Questions

2nd Quarter

In the traditional points-driven system, the points simply reset to zero at the beginning of each quarter. Students start fresh. In my understanding, that doesn't really jibe with the SBG system. At this point, I'm planning on carrying all the learning goals and scores from the 1st Quarter into the 2nd and not resetting student scores until the end of the semester. How do you SBG wizards out there handle this? I'm not sure if holding over grades from quarter to quarter is technically "allowed," which might make that decision for me.

Assessment routines

While there's no one right way to implement SBG, I'm always looking to make my implementation higher-impact while remaining easy to understand. Here's how things have gone down so far:

  • I give frequent small quizzes over a learning goal or two that we've been talking about in class.
    • If there is an obvious deficiency in student understanding, we take some time in class focused on the weaknesses and do an in-class reassessment later. If the vast majority of students understand the topic it becomes the responsibility of individual students to reassess before or after school.
  • I do frequent projects or activities that cover a couple to several learning goals. Usually there are at least a couple content-based learning goals and a few skill-based learning goals.
  • I've been pretty formal about letting students know when I'm assessing a learning goal. I'm not sure if this is the best method- especially for learning goals in the vein of, "I can effectively communicate and collaborate with others to complete a task." I'd like that to simply be an "always on" learning goal that can be assessed anytime they work in a group. However, I'm not quite sure how to communicate that assessment in the midst of group work, or whether it'll cause a problem to not assess every student on that learning goal for each group activity. For example, it's easy to pick out students who aren't doing well on that learning goal, while it often isn't as attention-grabbing when they're doing well. As a result I worry about assessing the negative instances more than the positive, thus artificially driving that score down.

How do you handle "on the fly" assessment?

______

  1. "Learning is King"       []
  2. There are at least 5 standards I could envision being semester long courses by themselves. []
  3. On a positive note, my SBG implementation came up in a meeting where the Asst. Superintendent of Curriculum & Instruction was present and she seemed interested in hearing more.     []

Citations & tracked classes: SBG questions

We're now 8 days into the new school year & standards-based grading has officially been introduced and implemented (though we don't yet have much in the way of assessments in the book). I really like how the use of SBG has required me to rethink how I present a topic and how we spend our time in class1.

However, a couple issues have popped up where I could use a little guidance from some SBG-brethren (or sistren):

Problem 1: Citations & plagiarism

In the past, if students failed to cite their sources or plagiarized, I wouldn't accept their project/assignment/what-have-you. I would give them an adequate amount of time to make the necessary changes and re-submit it without penalty, but if they didn't fix it up they wouldn't get credit.

As I was thinking through the SBG system, I realized that if I have a standard for properly citing sources and not plagiarizing information, I could be opening a loophole. I did a Twitter shout-out on the issue, and the SBG-Jedi @mctownsley responded to my question with a question:

Is citing sources an important issue you want all of your students to demonstrate?

Well, yes. I believe citing your sources is a very important skill- both for academic integrity and to point readers toward your sources so they can read them and see if they agree with your interpretation. However, imagine a student really hates citations (let's face it, they are a pain) and decides to play the system. They realize that as long as they use citations properly for the last assessment that requires them, they really don't need to do citations for any previous assessments. This doesn't seem ideal.

My solution as of now: I have a standard for citations. In addition, if a student turns in a project or activity that is plagiarized or missing citations when it should have them, I'll give it back, tell them to fix it up, and not change any grades on any standards (except for the citation standard). While this technically leaves a loophole intact, I believe it'll prevent too much monkeying around.

Problem 2: Tracked classes

I teach 9th grade Integrated Science all day, every day. However, there are three(!) levels of Integrated Science: Honors, regular, and Foundations. Let's ignore issues with tracking students since it's an issue beyond my control at the moment2.

Should all Integrated Science classes share the same standards? Should achieving mastery be defined the same way for all classes? My school weights honors classes more heavily (to prevent students taking low-level classes from becoming valedictorian, presumably), which seems to suggest a belief that the lower-level classes require less effort3.

My solution as of now: (1) The standards for all levels of Integrated Science are the same, but may be adjusted as I see necessary. If one level is showing a lack of knowledge I feel is important, I'll feel free to add a standard for just that level (and vice-versa for removing standards). I'm trying to be flexible and provide the best learning opportunities for all students. (2) I'm really not sure about this one. Right now I'm going to expect students at all levels to demonstrate similar levels of knowledge or skill to achieve mastery. Since I'm flexible on how much time I spend on standards in different classes, I'm willing to spend extra time if needed to get all students to the mastery level.

Whatchoo think?

I know there are many people out there who have already dealt with similar issues. I'd love to hear your own solutions to these problems as well as insights into my "solutions as of now."

_____

  1. I really like the way it allows me to focus in on areas of student weakness and differentiate instruction with super-laser-guided-satellite-gps precision.
  2. For the record, I find it's 95% a bad thing- including some pretty serious (but never mentioned aloud) issues with minorities being over-represented in Foundations and under-represented in Honors. There's an unspoken message being given to our minority population...
  3. Not an assertion I agree with, but thems the facts.

SBG Express: Details

The basic idea of standards-based grading is simple: Grade students on their understanding of specific learning goals. It's the details of that implementation that are devilish. In honor of the "publish, then filter" idea, writing this post is my way of working through (and hopefully solidifying) those details.

What standards?

I've started making a list of standards. I keep oscillating between thinking, "These standards are way too specific!" and "These standards are way too broad!" I'm taking that as a sign that they're about where I want them. This is a list in progress. As of this typing the standards cover the first several mini-units of 9th grade Integrated Science. I'm open to any insights, questions, or comments you have concerning the standards. If you missed the subtle hyperlink earlier, CLICK HERE TO VIEW STANDARDS!

Grading

When the rubber hits the road, I need a specific way to calculate a student's letter grade at any point in time. Figuring this part out is taking more mental energy than anything else. An incorrect implementation might make SBG no better than old-fashioned grading by cumulative points- and in fact could be worse. I'd like to avoid that.

  1. Each standard is worth 10 points.
    • Points translate directly to % and grades, so 9.5 = 95% = A
  2. The overall grade is calculated by averaging student scores on all the standards that have been assessed.
    • Some SBG'ers don't like the averaging method since some poorly understood standards might be covered up by a few well understood standards. Conjunctive scoring would get around this (Jason Buell gives a nice overview of conjunctive scoring here), but I worry that conjunctive scoring is a bit too "out there" for administrators, teachers, or students to get behind, and furthermore I'm not sure PowerSchool (our student information system) can handle it. I've put conjunctive scoring on the "possible future enhancements" list.
  3. Students may re-assess on any standard on any day.
    • Limits:
      • 1 standard per day, per student (the Cornally Corollary)
      • Students must know what standard they want to re-assess
      • Students can get help from me or re-assess, but not both on the same day (the Nowak Limit)
  4. Mid-terms and finals are summative
    • Meaning these grades can't change with reassessment. Total value of both combined is 20% of the overall course (see the sketch after this list for how the whole calculation fits together).
  5. I'll be using the SBGradebook along with PowerSchool to record & report student progress.
    • I'm not going to lie, I'm a little worried about how much time it'll take to enter grades twice. However, the SBGradebook looks like such an exercise in graphy-awesomeness that I couldn't not use it. Plus, it should help students track their own progress more effectively.
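
Putting those rules together, here's a rough sketch of how I read the overall calculation. One detail the list above doesn't pin down is how the mid-term and final split their 20%; I've assumed an even split here:

    function courseGrade(standardScores, midtermPct, finalPct) {
      // standardScores: scores out of 10 for every standard assessed so far
      var sum = 0;
      standardScores.forEach(function (s) { sum += s; });
      var standardsPct = 10 * (sum / standardScores.length);  // an average of 9.5 -> 95%
      var examsPct = (midtermPct + finalPct) / 2;             // assumes an even mid-term/final split
      return 0.8 * standardsPct + 0.2 * examsPct;             // exams are 20% of the course
    }

    courseGrade([9, 7.5, 10, 8], 82, 88);                     // -> about 86%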

I'm pretty sure if you've written about SBG in the past 12 months you'll see something of your system here. Hopefully you view it as flattery and not me biting your awesome ideas.

I'm pretty sure writing this post helped me more than it will help any reader. I needed to hash out several competing ideas I had floating around my head. As always, if you see something glaringly obvious that will sink this SBG ship, let me know.

SBG Express: I've got a ticket to ride

I mentioned it in my last post, and I'm officially announcing it here. My ticket is punched and I'm on board the SBG Express1 for the 2010-2011 school year!

I've spent the last few weeks reading and rereading several teachers' explanations and reflections on standards-based grading (including, but not limited to, Shawn Cornally, Jason Buell, Frank Noschese, Matt Townsley, and several others who will be mad at me for not giving them a shout-out). The more I read, the more I knew that standards-based grading was something I'd been working towards implementing, in some sort of sideways, subconscious way, for the last several years- even though I didn't know what "SBG" stood for until May of this year.

Here's my basic understanding of SBG to date:

  • Assessment and grades should accurately reflect student learning (not just student homework-turning-in abilities)
  • Instead of using cumulative-points-earned as the basis for student grades, use progress towards a set of "standards" (or "learning goals," or "knowledge criteria," or whatever you'd like to call them).
    • These standards describe specific areas of knowledge or expertise that students should gain. For example, "I can explain the law of gravity and understand what factors affect the strength of gravitational force."
  • Grades in your gradebook should help students realize where their understanding is great and where it's lacking.
    • Knowing they flunked "Quiz: Chapter 7" isn't helpful. Knowing they got 6 out of 10 on "I can explain why stars transition from one stage to another as they progress through their life cycle" gives the student valuable information that allows them to focus their remediation.
  • A grade on a standard is not set in stone (until exam time). Students can re-assess on any standard at any point in the school year. Grades can go down if the student shows a lack of understanding later in the course.
    • This should allow a student's grade to more accurately reflect their actual learning, rather than punishing them for not having learned something before a big test even though they knew it by the end of the course. Likewise, the student who crams successfully for the big test and then forgets it all should have a grade that better reflects their actual understanding.

I know! Sweet, right?

Fortunately, I've been blessed with a personality that's totally fine jumping into a project without having worked out all the details ahead of time. Unfortunately, I'm going to have to explain this whole SBG thing to quite a few students, parents, teachers, et cetera, in just a few days.

Tomorrow I'll share what I've got so far in the "details" folder.

_____

  1. copyright, 2010, Shawn Cornally