I've noticed that, at least subjectively, my classes alternate almost day-to-day: better, then worse, then better, and so forth. I teach twice a week, which might be very relevant if it were actually day-to-day alternation (i.e., periodic with period 2). However, it's not, so I have to wonder whether there is any real periodicity at all, or whether there is a confounder, which I now think is more likely.
My candidates for explanatory variables are [using the computer] and [reviewing material], which are themselves correlated. My reviews often involve some simple 'rithmetic and algebra on the blackboard, which serves to focus my attention and also gives the students something to fixate on. I'm a little sad about it, though, for this reason: when I'm not reviewing these simple procedures, I put some effort into graphical demos via the laptop, on "real" datasets. However, I don't have a very streamlined way to integrate the demos yet, and they aren't as interactive as I'd like. I wish I knew whether they were doing more harm than good for the class in their current form. I may ditch them anyway.
I think maybe the way to do it is to introduce the simple computation first, just to give the students a concrete foothold, and then add the computer example later. Often the computer examples (being "real" datasets, and also kinda small) are not as idealized as the textbook examples, thus requiring more familiarity in order not to get distracted by details. This is at odds with how I would look at them: as a reality-grounded first step into the problem domain (they are that, but only to me, and I don't need an introduction anyway; it's incredible how self-negating the process of teaching needs to be).
This may not have a lot of general validity; it's all based on a reaction to today's lecture, which I had fun with and got some good interaction from the students for. We covered regression lines (3rd day or so, of 4), and specifically, after spending half the class reviewing the concept of a regression line, we computed the associated conditional standard error (given X) using the shortcut form $\sqrt{1-r^2}\,\mathrm{SD}_y$. Throughout, I used a semi-original example of someone retaking the SAT: he does well the first time (720/800), pays a lot of money for a tutor, and then performs the same the second time (with the population distribution the same too). The moral is that even though he did the same, that's still pretty impressive, since he had to match his already strong performance from the first time around; the regression we used predicted he would go down from 720 to 640, whereas he kept it up and got another 720. This was a nice introduction, if I say so myself, to computing the SD about the regression line, in order to informally test the significance of his second 720. Note that without the correct conditioning, it is significant at the 0.02 level, whereas properly it is significant only at the 0.12 level. [Note: I used a correlation of about r=0.5, which I suspect is way too low, and I was honest with my students about that, as well as noting some of the difficulty of computing a correlation across not-entirely-alike samples.]
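As a sketch of the arithmetic, here is the naive versus conditional test laid out in a few lines of Python. The population mean of 500 and SD of 110 are my own placeholder figures (the lecture's exact parameters aren't recorded here); only the r=0.5 and the 720 scores come from the example above.

```python
from math import erfc, sqrt

# Placeholder population parameters (assumed, not from the lecture)
mu, sd = 500.0, 110.0
r = 0.5                    # test-retest correlation, as in the example
score1 = score2 = 720.0

def upper_tail(z):
    """One-sided P(Z > z) for a standard normal."""
    return 0.5 * erfc(z / sqrt(2))

# Naive: compare the second 720 to the overall population.
z_naive = (score2 - mu) / sd
p_naive = upper_tail(z_naive)

# Conditional on the first score: regression prediction plus the
# conditional (given X) standard error, sqrt(1 - r^2) * SD.
predicted = mu + r * (score1 - mu)
se_cond = sd * sqrt(1 - r**2)
z_cond = (score2 - predicted) / se_cond
p_cond = upper_tail(z_cond)

print(f"naive z = {z_naive:.2f}, p = {p_naive:.3f}")        # ~0.023
print(f"conditional z = {z_cond:.2f}, p = {p_cond:.3f}")    # ~0.124
```

With these placeholder numbers, the one-sided p-values land near the 0.02 and 0.12 levels mentioned above; the predicted second score comes out somewhat different from the 640 in the text, since that depends on the exact mean and SD assumed.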
And not a computer or dataset in sight; we used only the summary statistics, since we're all comfortable with the mean, SD, correlation coëfficient, &c. As a result, I was able to talk about the concepts (binormality; conditional normality; hints of homoscedasticity; I even got to relate the regression fallacy to my SAT example) with a bit of fluency. I suspect I may have been doing things backwards before...
no subject

Date: 2008-02-25 09:16 pm (UTC)

I had no idea, before reading up a bit, how many people take the SAT multiple times, nor how informative the number of times taken is about the initial score. Another highly depressing fact, and correcting completely for it is highly nontrivial.