INFOMATTERS | Applying a Third Force to the Architecture of Information

Seeing accreditation from other sides

I remember from studying perception in undergrad psychology that the term ‘cue aware’ fell into our vocabulary to describe how we notice a phenomenon more often once we know it exists. I feel this way about accreditation. Every book or article I read now seems to contain some example of it. Up first, a pair of articles in the latest issue of Law Library Journal, where James Milles, professor at SUNY Buffalo, states in his title that ‘law libraries are doomed’. This is followed by a rejoinder from Kenneth Hirsh, professor at Cincinnati, suggesting it might not be quite that bad. In both, there is much discussion of law school accreditation and how little it does to protect the old library function that some argue is central to the great law school experience. Perhaps most telling, the new ABA accreditation standards no longer require that a law library director hold both a law and an LIS degree; this has been softened to ‘should’, with numerous cases cited where even this urging is ignored.

The general arguments about law education are familiar to folks who follow LIS literature — the disconnect between the faculty and the profession, the diminishing demand for the degree, the costs of education, the shifts in research and reading behavior in an online world. So is it reassuring or worrying that law schools are sharing the same pain?

Hot on the heels of this, I land on a report from the Senate Committee on Health, Education, Labor and Pensions which tackles higher education accreditation head-on. In it, the major criticisms of current accreditation models are stated (familiar to those of us who read this type of stuff): it doesn’t reflect quality, it stifles innovation, it’s costly, burdensome and bureaucratic, etc. Their recommendations are of the kind some of us have suggested, e.g., refocus on quality rather than compliance, allow more flexibility in review processes, and so on. It’s not rocket science, but one imagines we’d never have got a rocket into space if accreditation had been applied to those engineers and scientists.

Chinese radio on reading

I had the pleasure of being interviewed on China Radio International’s English-language station this week. Seems there is a lot of interest there in the impact of new media on people’s reading habits, and the government is concerned that the average citizen reads only five books a year. Hmm… maybe we’re not so different here, though adults do report reading about 17 per year on average. The trouble with all reading estimates is that people exaggerate, since reading is such a socially desirable behavior. It’s a bit like that with estimates of library visits: nobody wants to give the appearance of barbarism, so yes, of course we all go there regularly, right? Anyway, the interview was timed to mark World Book Day, which seemed to pass most of the world by — HERE’S the link

When blogging is life and death

Most comments on the dangers of social media and blogging tend toward warnings about off-the-cuff remarks, or advice to present a public face you will not be ashamed of in a year’s time when meeting someone new or applying for a job. Jon Ronson’s new book ‘So You’ve Been Publicly Shamed’ is bringing back, and shedding some new light on, well-known examples such as the woman who tweeted before boarding a flight from the UK to South Africa and disembarked hours later to find she’d created a maelstrom of hate with her supposedly off-the-cuff comment about AIDS. People really do use these tools to humiliate others, and the cost, Ronson argues, can be to make the rest of us unwilling to speak freely as we collectively get sucked into groupthink. All true and bad, one imagines, but it can be even worse.

The mainstream media have given more attention to this new book than to the fact that, once again, a blogger who espouses atheism has been murdered for his words. In Bangladesh this week, a blogger was hacked to death. Washiqur Rahman was attacked in the street, in daylight. His ‘crime’ was writing about the dangers of religious fundamentalism. He was right. And he was not alone. Earlier this year another blogger, the American Avijit Roy, was murdered by what are described as machete-wielding assailants while returning from a book fair with his wife (who lost a finger in the attack). Three bloggers have been murdered this way in that country in the last two years. And of course, this is on top of the case in Saudi Arabia, where the public flogging of a blogger for ‘insulting Islam’ actually brought a murmur or two of disapproval from international allies.

One of the lesser-known aspects of free-speech suppression (which is everywhere) is that atheists are among the most suppressed groups. Espousing atheism is estimated to be a crime punishable by death in 13 countries: Afghanistan, Iran, Malaysia, Maldives, Mauritania, Nigeria, Pakistan, Qatar, Saudi Arabia, Somalia, Sudan, United Arab Emirates and Yemen. And that’s just the list of countries where it is enacted as law; there are many more where crimes against atheists are largely ignored and rarely prosecuted. And yet religious groups continually campaign that they are the ones who feel persecuted and need laws protecting them. Protect one, protect all, surely — is that not a fundamental of all major religions? Those who speak out and pay the ultimate price deserve more than a small column in the euphemistically titled ‘free press’.

The real point here is that I believe shaming others for ignorant tweets sits lower on the same continuum of crowd hysteria that leads to the machete murder of bloggers. This should concern people who use social media to chastise but never imagine themselves as fanatics or bad people. The technologies underlying rapid shaming, and the behaviors they enable, should be studied as more than a curiosity of our age or a marketing vehicle for corporate identity and personal image-making. But I guess there’s less money or fame in that type of work. Come in, Information Science… there’s a research question to answer.

The accreditation issue again

I’ve been surprised at the reaction to my earlier plea for accreditation reform (see below), with more than a few people contacting me offline to offer support but, in doing so, revealing that they did not feel able to say this out loud in their own schools and departments. That is truly worrying. If we cannot openly discuss this because of fear among faculty, then something is really wrong. Nearly as worrying, but with an ironic twist, it was pointed out to me that the Williamson Report of 1923 invoked the need for ALA-related accreditation precisely because the schools of the time were felt to be unable to raise standards on their own. Well, now look where we are.

I seem to find myself on the same side as the American Council of Trustees and Alumni, ACTA, though a close reading of their various publications gives me pause. Let’s just say we share the same concern that accreditation no longer ensures quality, and leave it there.

The real point, though, is that everyone, in principle, believes accreditation should ensure a certain standard of educational experience. When, then, did this setting of standards become so tied to processes of endless review and to targets that bear so little relevance to real-world needs? Maybe ACTA are not so far off when they state that too often accrediting agencies act as monopolies, are a costly nuisance, and offer no guarantee of quality. Surely it’s time to revisit this whole mess?

KM meets ML – Information the driver for leveraging distributed expertise

Interesting talk from Jean-Claude Monney, now leading KM initiatives at Microsoft. I am generally disappointed in most KM discussions; they seem strong on claims, short on evidence, and spend a lot of time trying to change people’s behavior despite everything we know about how humans and organizations operate. That said, sometimes people do push this area forward. Give it a listen: it is short on visuals, but there are some deep issues discussed within. Time for a KM comeback?

Achieving Excellence in Global Value Chain – Jean-Claude Monney Group VP STMicroelectronics from Jean-Claude F. Monney on Vimeo.

iSchool and Iron Mountain launching new partnership

Am delighted that we’re engaging in a series of open educational sessions with Iron Mountain. It’s a wonderful relationship for us; Iron Mountain are great to work with, and this promises to open up new avenues for the study of information management outside the traditional approaches. See more here. The launch event is this week at the AT&T Conference Center here at UT. Open to all, and watch for new events.

Please reform accreditation

The annual Deans and Directors meeting at ALISE this year proved refreshingly robust. We had but one real topic: the accreditation process pursued by the ALA Committee on Accreditation. There is a proposal afoot to reduce the number of standards from six to five. This alone is worthy of celebration, as ALA follows the laughable requirement of having one person per standard when forming site teams to visit programs. There is almost no justification for this but tradition, and consequently site teams have arrived at schools outnumbering the tenure-track faculty. Since no one seems to be laughing, especially those who foot the bill for this extravagance, it would at least seem that merging a couple of standards has one tangible benefit for programs.

That said, the discussion quickly moved on from wordsmithing the standards to challenging the whole process, and it was not just a minority of folks who pushed for reform. Speaker after speaker complained of the persistent disconnect between the review by the site team and the final decisions from the politburo committee, the slavish insistence on over-documenting learning outcomes, the constant demands for reports, reports and even more reports (usually about very little), the credentials of those conducting the review, and in some cases the embarrassment teams cause to programs by their obvious lack of familiarity with university standards when dealing with upper administrations. Sadly, there was also a feeling in the room that one must be careful raising objections or one’s program will face retribution for speaking out (hence my temperate comments here). It really is hard to imagine that anyone believes this is a voluntary, collegial process anymore. Does it surprise you that only now, after years of campaigning, the deans and directors will actually have a representative at the table when a new committee (we need more!) is formed to consider the problems?

Despite what one imagines, deans and directors like to do more than just complain (yes, it’s hard to resist the line that we leave that to the faculty — rimshot please!), so we actually considered some alternatives. These included reducing the number and length of reports between reviews, using existing statistical data rather than forcing repeated submissions, lengthening the time between review visits, and getting more faculty involved on the final review committee. All sensible options, but I’d like to suggest we go further.

Accreditation, for all its flaws, is essentially about quality control, but somewhere along the line the emphasis on quality has taken a backseat to control. There are many reasons which I won’t rehash here, but no matter the motivations, the results are obvious. Programs are expected to comply with language, measures and indices that reveal little about quality and more about allegiance. Take, for example, the rather important matter of graduate placement. Certainly it is used by potential students; it might reasonably be interpreted as a measure of how well a program prepares new professionals for their careers; and it is based on the input of external employers. Yet it is not mentioned specifically in the standards. One could meet all the requirements for accreditation, articulating all the specific learning outcomes for each course, and yet reveal nothing about the real job prospects and advancement of the students who come for this education. Is it any wonder we hear so many accounts of disgruntled, poorly paid graduates who feel their Master’s degree was not quite all it promised to be?

How hard could it be to identify and document indices of quality? I would suggest there are some basic measures we can all agree offer clues to a program’s overall quality:

  • Faculty size and rank
  • Graduation rate
  • Employment rate of graduates
  • Budget and resources
  • Curricular coverage

Surely there are others, but let’s consider these for a moment. If a program has, say, 12 faculty, all on the tenure track, this tells us something. If it has five, one of whom is a part-timer and only two of whom are on the tenure track, this tells us something else. No, it’s not automatically the case that the first is to be accredited and the second not, but it does give us a real data point. Having sufficient faculty is important. Having those faculty on the tenure track tells us about the university in which the program exists and how it views the program. And having these same faculty deliver the courses that make up the program tells us something more. Similarly with budget. These are hard numbers which obviously vary across regions and universities, but there is surely a minimum, secure, recurring funding level that a faculty of a certain size must have to deliver a graduate program. We can make the same estimates of space or technical infrastructure: a basic threshold at which we can be confident a program really is able to exist and deliver instruction. And yes, let’s measure employment rate. It is not a perfect measure (there are none), but if your graduates are in demand and earning decent salaries over time, this suggests the professional community must be satisfied to some extent with your program’s efforts. If you cannot demonstrate this, then maybe what you are providing is not quite up to professional standards.

You can see where this is going. I would allow small schools, or those just starting up, to make a case for themselves by emphasizing some measures over others. Mature programs should be able to demonstrate relatively objectively how they are resourced, what faculty standards they maintain, how they deliver the program, and where their graduates go upon completion. Such reporting need not be onerous. Certainly there is room for a narrative report on the program’s emphasis, mission, plans and general philosophy, but this would be wrapped around hard data of the kind outlined above and used to justify the claims to quality. There is surely a form of Turing test for programs we could apply here: answer the questions and let a normal evaluator determine whether you are running a solid program or a diploma mill.

The second part of this would be to revisit the mechanisms of review. If a program is small or new, unable to document some key aspects such as placement or curricular coverage by appropriate faculty, or if the budget and resources seem to prevent appropriate instructional delivery, then by all means send in a review team and make some specific recommendations. If a program decides to revisit its mission, is merged, or generally undergoes a major change of direction, then send in a review team. But for most programs, once established and able to continually document their capabilities with data, let them do so by reporting every few years on how they are doing, using this agreed data set. I suggest that this need not be difficult. If enrolments are healthy, faculty are strong and actively delivering the program rather than leaving it to adjuncts, and graduates report healthy employment prospects in relevant professional roles, then the program is likely doing something right. There are certainly more data points and explanations to add, but these basic measures of quality are essential; without them, something is likely in need of attention.
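To make the point that this screening could be almost mechanical, here is a minimal sketch of the idea in code. Everything in it is illustrative: the field names, the threshold values, and the function are my own invented examples, not anything drawn from actual ALA standards.

```python
# Sketch of a data-driven accreditation screen: compare a program's
# reported indices against agreed minimums and list any shortfalls.
# An empty result means a data-only review suffices; any shortfall
# would trigger a site visit. All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class ProgramReport:
    tenure_track_faculty: int
    graduation_rate: float      # fraction of entrants who graduate
    employment_rate: float      # fraction employed in relevant roles
    recurring_budget: float     # secure annual funding, in dollars

# Illustrative thresholds; a real accrediting body would negotiate these.
MINIMUMS = {
    "tenure_track_faculty": 6,
    "graduation_rate": 0.70,
    "employment_rate": 0.75,
    "recurring_budget": 500_000,
}

def review_shortfalls(report: ProgramReport) -> list[str]:
    """Return the indices falling below threshold (empty = no site visit)."""
    return [
        field for field, minimum in MINIMUMS.items()
        if getattr(report, field) < minimum
    ]
```

The point of the sketch is simply that once the indices are agreed, deciding whether a program merits a site visit is a mechanical comparison, not a multi-year documentation exercise.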

Most schools are already overburdened by compliance reporting and university-wide accreditation processes; adding more to the process really does not add value. A shift to more data-driven reporting of agreed quality indices (and can anyone seriously argue against graduate employment as one such index?) would allow for some flexibility in review, rather than foisting a one-size-fits-all cycle on every program or allowing increasingly obsessive attention to secondary processes to dominate the review. Some programs would have a site visit; some would not. Some would be required to justify developments; others would be able to continue as they are if the data made their case. Schools would, in some sense, be able to tailor reviews as best fits their needs, and we might move toward that more collegial, voluntary process of quality control that we are told is at the heart of accreditation. That it might also shake out a few of the programs failing to deliver anything of real value would be a bonus, but I am sure none of us knows any of those.