A few weeks back I wrote about using Chromebooks in some of our biology labs, and now that the Acer C720 has started shipping, I ordered two of them to start testing. I’ve only had them for a day, so this is not a performance review in any way, but I will say that it seems like a very functional computer. My daily machine is an 11″ MacBook Air, and the C720’s screen size and keyboard are on par with it, although the color gamut seems more restricted; so far the battery life seems much better than the Air’s.
Part of what I want to work through as I’m testing is what, exactly, is the service model I’m aiming for — what is the purpose for these? The current computers are used to run evolution and ecology simulation software and a statistics package. I didn’t even bother requesting them this semester for our new bioinformatics exercise, opting instead to encourage students to bring their own, which worked fine. So why not just continue to do that instead of investing in lab-owned notebooks? If we are going to have to virtualize some of the software anyway, why not just give students access to it on their own machines?
This would be consistent with the trendy practice known as ‘bring your own device’ (BYOD), but I’m not convinced it’s the right way to go for us. One of the biggest weaknesses of this policy for education is that it lacks any kind of predictability. I’m not referring to predictability in terms of make, model, and minimum specs; I mean whether the student brought their computer that day. There is a great benefit to being able to count on certain equipment being available and functional when planning a lab. For example, I know that we have a number of nice spectrophotometers, so I can design a lab exercise that requires them. Knowing that each student or pair of students is going to have access to a computer, and knowing what that computer is capable of, changes the design of the lab, to put it simply.
Here are a few activities that come to mind:
- The lab manual could be moved online. As it stands, we have the manual printed for the students and (try to) collect the cost from them, which turns me into a cashier. This could be as simple as a PDF or as complex as a real ebook with interactive content.
- We could produce short instructional videos for routine lab techniques and link to them from the online lab manual. These would be for things like pipetting, using the spectrophotometer, setting up a TLC experiment, or even setting up a slide on the microscope, which seems like a never-ending mystery to many students.
- Get into more detail on the practical side of data management and statistical testing. As it stands, we send students away and ask them to perform simple statistical tests on the data they have collected, but what they take away from this varies widely across the class. Some really get it, but others can’t get a handle on it. It would be nice to do more show-and-tell before sending them away to work alone.
- Do some real training in literature searching. We have a light requirement for incorporating primary literature into the 2 formal lab reports, but we don’t spend time in lab talking about how to do this. I’d like to change this.
I could go on with a dozen other examples, but none of these is surprising, nor do any require anything other than a computer with Internet access. But you have to know it’ll be there. Right now, the range of access to a computing device begins at ‘none’, and having a set of lab computers would drastically improve that to ‘something’. I guess that is what I find so attractive about this whole idea: it offers a ‘technology floor’ where there is none now.
The idea of a technology floor works on a number of levels here. Its primary job is to support the objectives we decide to teach toward in any particular lab. But it also doesn’t have to remain exposed; students could choose to bring an equivalent computer of their own and use it. I’m thinking of the difference between vinyl flooring and travertine tile — they look and feel quite different, but ultimately serve the same function.
Once the court voids the nondiscrimination rule, AT&T, Verizon, and Comcast will be able to deliver some sites and services more quickly and reliably than others for any reason. Whim. Envy. Ignorance. Competition. Vengeance. Whatever. Or, no reason at all.
Smart article on Salon about the apparent strategy adopted by the big MOOC providers, which echoes that of the voucher/charter school proponents of the last two decades:
The plan is simple. First, declare a crisis in education that doesn’t actually exist. Second, declare that a for-profit model can fix the crisis. (This is easy when you get to invent the particular calamity.) Third, rather than starting small and building empirical support from experts in the field, seek sweeping legislative changes that lock your position into the system.
This isn’t, however, a head-in-the-sand piece about how everything is fine in higher ed. The authors point out that the MOOC providers have fabricated a story in which the problem is access to college due to costs, when the real problem is retention and degree completion.
The University of Kentucky is making a major investment in data analytics to try to improve student retention. The approach is described in an article at Inside Higher Ed:
Every time students open the app to check their course schedule or the date for the next Wildcats game, they may be faced with a quick question: Have you bought all your textbooks already? Do you own a tablet? On a scale from one to five, how stressed are you?
The university collects each student’s responses to these kinds of questions. To that record, it also adds the student’s interactions with the campus LMS and participation in campus events, which are tracked through a card-swipe-based attendance and incentive system.
These tracking systems alone represent a big investment, but analytics is about doing something with all that data. UK has made a major push to draw meaning from the data by hiring a team of 15 data analysts to develop and refine a predictive model of student engagement. The end goal is to increase retention rates, which, assuming the effort is even marginally successful, will more than pay for the investment in all the staff and databases.
Some quick back-of-the-envelope math:
- The cost of attendance in-state is about $20,000, and $30,000 for out-of-state (source)
- The average financial aid award is about $10,000
- So net revenue per student is about $10,000-$20,000 (assuming in-state students); let’s call it $15,000 for simplicity’s sake.
- The freshman enrollment was about 4300 students
- A 1% increase in retention is 43 students
- 43 × $15,000 = $645,000 additional revenue
- $645,000 × 4 yrs = $2,580,000
- $2,580,000 ÷ 15 staff = $172,000 per additional staff line
And that’s making very conservative estimates throughout. That’s also not including the cost savings on the enrollment side of not needing to recruit as large a class.
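The estimates above can be condensed into a short script (the figures are the rough approximations from the list, not official University of Kentucky numbers):

```python
# Back-of-the-envelope retention revenue estimate, using the
# approximations from the list above (not official figures).
net_revenue_per_student = 15_000   # midpoint of the $10k-$20k net revenue range
freshman_class = 4300
retention_gain = 0.01              # a 1% bump in retention
analytics_staff = 15

extra_students = round(freshman_class * retention_gain)
extra_revenue_per_year = extra_students * net_revenue_per_student
extra_revenue_4yr = extra_revenue_per_year * 4
revenue_per_staff_line = extra_revenue_4yr / analytics_staff

print(extra_students)           # 43
print(extra_revenue_per_year)   # 645000
print(extra_revenue_4yr)        # 2580000
print(revenue_per_staff_line)   # 172000.0
```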
Audrey Watters, who writes at Hack Education, has posted a transcript of a talk she gave at Columbia as part of their Conversations About Online Learning series. Setting aside the envy I have of a place that holds a lecture series about technology and higher learning, Watters goes deep on some of the implications of “data mining” in education, fleshing out some of the ways such data might be used and pointing out how risky that might be for students.
…all this data that students create, that software can track, and that engineers and educators and administrators can analyze will bring about a more “personalized,” a more responsive, a more efficient school system.
How will this magic happen? Using the same secret algorithmic sauce that companies like Google use to tailor search results and ads, and Amazon uses to sell you, well, pretty much anything. So what’s the hitch? There are at least two, according to Watters: privacy and money.
It may be obvious, but if data is going to make a big difference in student learning, that is going to require a sea change in the rules surrounding access to that data. Or is it? It appears that right now, the rules are being skirted by private companies that don’t have the same restrictions as actual schools. I suspect that most students and their parents aren’t aware of this end run around educational data privacy. It is access to this kind of data that will be necessary to assist with learning, in the absence of actual human interaction.
And the money? It’s not money in the sense of cost to students. On the contrary, most ‘big data’ education projects are free to the student, meaning someone else is paying. For now, the bills are being paid by venture capital investors that are expecting BIG returns. We’re in the early days, the thinking goes, of a major shift in the way education is done, and one of the biggest parts of this shift is the privatization of education. Sure, there has been some suggestion that these programs will lead to a system of credentials not unlike a degree, and some programs have even been rolled out. But for the most part, the schools with the biggest stakes in this territory thus far are not talking about any kind of equivalency between their live and online programs.
We’re working our way through the major kinds of macromolecules in my Intro Cell Biology class — Carbohydrates, Lipids, Proteins, Nucleic Acids. Today I taught on the composition and structure of proteins. I really like the topic because it’s a chance to bring in so many of the concepts we’ve already discussed, like hydrogen bonding, polar vs. non-polar molecules, etc.
It’s also a topic that has been taught thousands of times before. A quick search on YouTube reveals hundreds of short videos, some of which are quite good, covering the same material. Yet there we were, talking about the same things: amino acids and peptide bonds, tertiary structure and protein folding. Why?
As best I can tell, it’s because there we were, all together, all thinking about the same thing at the same time in the same place. Some of us understand more about it than others. Some have questions about it. But we had all made a commitment to be there with each other with the shared purpose of learning about proteins today. And I think it worked.
I’m starting to at least think about getting back into the flow after a few weeks away from the usual schedule. At the end of July I spent about a week at the annual meeting of the American Society of Plant Biologists, which was a great meeting for me this year. I heard lots of great science and got some excellent input and feedback from people I respect and admire. I also met a few new people and have some possible collaborations simmering now.
Since returning from the meeting, I’ve been mostly relaxing and taking some time away from the office. I’m getting some projects done around the house and spending time with the kiddos as much as I can before they start back to school next week.
While I’ve been away, it looks like Fargo has been doing everything but sitting still. I used it heavily to take notes on lectures and conversations while I was at the conference, and I’ve dabbled with some of the scripts in my menubar. I’m looking forward to thinking more about how to incorporate it into my class this fall.
Most of the day I worked on our poster, taking the opportunity to do lots of statistical tests to prepare for writing the manuscript, which comes next.
One of the comparisons I want to make is within a given treatment, across different time points. I’ve come up with a heat map presentation that I think I’m happy with, but I’m not totally sure yet.
In this graphic, green indicates a significant increase, while pink represents a significant decrease. I think this highlights the key points I’m trying to make, but I’m going to sleep on it.
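The color scheme boils down to a three-way code per cell, something like the following sketch (the p-values and differences here are made-up placeholders, not the poster’s data):

```python
# Each cell of the heat map gets +1 (green, significant increase),
# -1 (pink, significant decrease), or 0 (no significant change).
ALPHA = 0.05

def code_cell(p_value, mean_difference):
    """Return the heat-map code for one treatment/time-point comparison."""
    if p_value >= ALPHA:
        return 0                              # not significant -> neutral cell
    return 1 if mean_difference > 0 else -1   # direction sets the color

# rows = treatments, columns = time points (placeholder values)
p_values   = [[0.01, 0.20], [0.03, 0.04]]
mean_diffs = [[ 2.5, -1.0], [-0.8,  1.2]]

codes = [[code_cell(p, d) for p, d in zip(p_row, d_row)]
         for p_row, d_row in zip(p_values, mean_diffs)]
print(codes)   # [[1, 0], [-1, 1]]
```

The coded matrix is then what gets rendered with a discrete green/white/pink colormap.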
Tomorrow I’ll try to write the rest of the explanatory text so I have a few days to let it mellow before sending it off to the printer.