INFOMATTERS | Applying a Third Force to the Architecture of Information

The accreditation issue again

I’ve been surprised at the reaction to my earlier plea for accreditation reform (see below), with more than a few people contacting me offline to offer support but, in doing so, revealing that they did not feel able to say this out loud in their own schools and departments. That is truly worrying. If we cannot openly discuss this because of fear among faculty, then something is really wrong. Nearly as worrying, though with an ironic twist, it was pointed out to me that the Williamson Report of 1923 invoked the need for ALA-related accreditation because the schools of the time were felt to be unable to raise standards on their own. Well, now look where we are.

I seem to find myself on the same side as the American Council of Trustees and Alumni (ACTA), though a close reading of their various publications gives me pause. Let’s just say we share the same concern that accreditation no longer ensures quality, and leave it there.

The real point, though, is that everyone, in principle, believes accreditation should ensure a certain standard of educational experience. When, then, did this setting of standards become so tied to processes of endless review and targets that show so little relevance to real-world needs? Maybe ACTA are not so far off when they state that too often accrediting agencies act as monopolies, are a costly nuisance, and offer no guarantees of quality. Surely it’s time to revisit this whole mess?

KM meets ML – Information the driver for leveraging distributed expertise

Interesting talk from Jean-Claude Monney, now leading KM initiatives at Microsoft. I am generally disappointed in most KM discussions: they seem strong on claims, short on evidence, and spend a lot of time trying to change people’s behavior despite everything we know about how humans and organizations operate. That said, sometimes people do push this area forward. Give it a listen. It is short on visuals, but there are some deep issues discussed within. Time for a KM comeback?

Achieving Excellence in Global Value Chain – Jean-Claude Monney Group VP STMicroelectronics from Jean-Claude F. Monney on Vimeo.

iSchool and Iron Mountain launching new partnership

I am delighted that we’re engaging in a series of open educational sessions with Iron Mountain. It’s a wonderful relationship for us: Iron Mountain are great to work with, and this promises to open up new avenues for the study of information management outside of the traditional approaches. See more here. The launch event is this week at the AT&T Conference Center here at UT. Open to all, and watch for new events.

Please reform accreditation

The annual Deans and Directors meeting at ALISE this year proved refreshingly robust. We had but one real topic: the accreditation process pursued by the ALA Committee on Accreditation. There is a proposal afoot to reduce the number of standards from six to five. This alone is worthy of celebration, as ALA follows the laughable requirement of having one person per standard when forming site teams to visit programs. There is almost no justification for this but tradition, and consequently site teams have arrived at schools outnumbering the tenure-track faculty. Since no one seems to be laughing, especially those who foot the bill for this extravagance, it would at least seem as if this merging of a couple of standards has one tangible benefit for programs.

That said, the discussion quickly moved on from wordsmithing the standards to challenging the whole process, and it was not just a minority of folks who pushed for reform. Speaker after speaker complained of the persistent disconnect between the review by the site team and the final decisions from the politburo committee, the slavish insistence on over-documenting learning outcomes, the constant demands for reports, reports and even more reports (usually about very little), the credentials of those conducting the review, and in some cases the embarrassment teams cause to programs by their obvious lack of familiarity with university standards when dealing with upper administrations. Sadly, there was also a feeling in the room that one must be careful raising objections or one’s program will face retribution for speaking out (hence my temperate comments here). It really is hard to imagine that anyone believes this is a voluntary, collegial process anymore. Does it surprise you that only now, after years of campaigning, the deans and directors will actually have a representative at the table when a new committee (we need more!) is formed to consider the problems?

Despite what one imagines, deans and directors like to do more than just complain (yes, it’s hard to resist the line that we leave this to the faculty – rimshot please!), so we actually considered some alternatives. These included reducing the number and length of reports between reviews, using existing statistical data rather than forcing repeated submissions, lengthening the time between review visits, and getting more faculty involved in the final review committee. All sensible options, but I’d like to suggest we go further.

Accreditation, for all its flaws, is essentially about quality control, but somewhere along the line the emphasis on quality has taken a backseat to control. There are many reasons, which I won’t rehash here, but no matter the motivations, the results are obvious. Programs are expected to comply with language, measures and indices that reveal little about quality and more about allegiance. Take, for example, the rather important matter of graduate placement. Certainly it is used by potential students, it might reasonably be interpreted as a measure of how well a program prepares new professionals for their careers, and it is based on the input of external employers, yet it is not mentioned specifically in the standards. One could meet all the requirements for accreditation, articulating all the specific learning outcomes for each course, and still reveal nothing about the real job prospects and advancement of the students who come for this education. Is it any wonder we hear so many accounts of disgruntled, poorly paid graduates who feel their Master’s degree was not quite all it promised to be?

How hard could it be to identify and document indices of quality? I would suggest there are some basic measures we can all agree offer us some clues as to a program’s overall quality:

  • Faculty size and rank
  • Graduation rate
  • Employment rate of graduates
  • Budget and resources
  • Curricular coverage

Surely there are others, but let’s consider these for a moment. If a program has, say, 12 faculty, all on the tenure track, this tells us something. If it has 5, one of whom is a part-timer and only two of whom are on the tenure track, this tells us something else. No, it’s not automatically the case that the first is to be accredited and the second not, but it does give us a real data point. Having sufficient faculty is important. Having these faculty on the tenure track tells us about the university in which the program exists and how it views the program. And having these same faculty deliver the courses that make up the program tells us something more. Similarly with budget. These are hard numbers which obviously vary across regions and universities, but there is surely a minimum, secure, recurring funding level that a faculty of a certain size must have to deliver a graduate program. We can make the same estimates on space or technical infrastructure for programs: a basic threshold at which we can be confident a program really is able to exist and deliver instruction. And yes, let’s measure employment rate. It is not a perfect measure (none are), but if your graduates are in demand and earning decent salaries over time, this suggests the professional community must be satisfied to some extent with your program’s efforts. If you cannot demonstrate this, then maybe what you are providing is not quite up to professional standards.

You can see where this is going. I would allow small schools, or those just starting up, to make a case for themselves by emphasizing some measures over others. Mature programs should be able to demonstrate relatively objectively how they are resourced, what faculty standards they maintain, how they deliver the program, and where their graduates go upon completion. Such reporting need not be onerous. Certainly there is room for a narrative report on the program’s emphasis, mission, plans and general philosophy, but this would be wrapped around some hard data of the kind outlined above and used to justify the claims to quality. There is surely a form of Turing test for programs we could apply here: answer the questions and let a normal evaluator determine if you are running a solid program or a diploma mill.

The second part of this would be to revisit the mechanisms of review. If a program is small or new, unable to document some key aspects such as placement or curricular coverage by appropriate faculty, or if the budget and resources seem to prevent appropriate instructional delivery, then by all means send in a review team and make some specific recommendations. If a program decides to revisit its mission, is merged, or generally undergoes a major change of direction, then send in a review team. But for most programs, once established and able to continually document their capabilities using data, let them do so by reporting every few years on how they are doing using this agreed data set. I suggest that this need not be difficult. If enrolments are healthy, faculty are strong and actively delivering the program rather than leaving it to adjuncts, and graduates can report healthy employment prospects in relevant professional roles, then it’s likely the program is doing something right. There are certainly more data points and explanation to add, but these basic measures of quality are essential; without them, something is likely in need of attention.

Most schools are already overburdened by compliance reporting and university-wide accreditation processes. Adding more to the process really does not add value. The shift to more data-driven reporting of agreed quality indices (and can anyone seriously argue against graduate employment as one such index?) would allow for some flexibility in review, rather than foisting a one-size-fits-all cycle on every program or allowing increasingly obsessive attention to secondary processes to dominate the review. Some programs would have a site visit, some would not. Some would be required to justify developments; others would be able to continue as they are if the data made their case. Schools would in some sense be able to tailor reviews to best fit their needs, and we might move toward that more collegial, voluntary process of quality control that we are told is at the heart of accreditation. That it might also shake out a few of the programs that are failing to deliver anything of real value would be a bonus, but I am sure none of us knows any of those.



The new world order is scarily familiar

Two somewhat unrelated news items caught my eye this week and suggested there is a long way to go before we understand what the new technologies of information mean for our world and, more importantly, how to leverage their benefits. News of the death of King Abdullah of Saudi Arabia will now dominate coverage from that country, perhaps deflecting the rather more terrifying reports of Raif Badawi’s treatment at the hands of the authorities. In case you missed it, he has been sentenced to 1,000 lashes, 10 years of imprisonment and a fine of close to $250,000 for blogging. Yes, you read that right: for blogging. Not hate crimes, not some imagined insult to a god, not murder, just blogging. And if you read his blog, you will note that the writings are generally smart, insightful and aimed at encouraging intelligent discussion. Oh, and don’t forget, Saudi Arabia is one of our allies.

Now, looking at that link above, think about this. A federal judge in Dallas yesterday sentenced journalist Barrett Brown to 63 months in prison (he has already served more than two years of it) for providing a link to hacked material. You can add a fine of almost $1m to that too. But at least he did not get any lashes, right?

Journalists rightly point to the chilling effect Brown’s sentence has on investigative reporting, arguing that if one accidentally linked to hacked data, such as the leaked customer files so many companies seem to have a hard time securing, one would likely be similarly prosecuted. Showing distinctly more sangfroid at the news than I would in his shoes, Brown stated:

“Good news! — The U.S. government decided today that because I did such a good job investigating the cyber-industrial complex, they’re now going to send me to investigate the prison-industrial complex. For the next 35 months, I’ll be provided with free food, clothes, and housing as I seek to expose wrongdoing by Bureau of Prisons officials and staff and otherwise report on news and culture in the world’s greatest prison system. I want to thank the Department of Justice for having put so much time and energy into advocating on my behalf; rather than holding a grudge against me for the two years of work I put into in bringing attention to a DOJ-linked campaign to harass and discredit journalists like Glenn Greenwald, the agency instead labored tirelessly to ensure that I received this very prestigious assignment. — Wish me luck!”

He won’t be the only one who needs it!


Enough with the ‘entrepreneurialism’ already….

Everywhere you go in academia these days (or on social media) you find people harping on about being entrepreneurial or innovative. We are supposed to aspire to think big, think different, and to disrupt the status quo. Hell, you can even make up new words to show how unique you are (or if you are self-titled ‘thought-leader, speculatist and acclaimed thought-leader’ Troy Hitch, you can even forget the old rule about not repeating yourself, THAT’s how innovative he is). It used to be that everyone aspired to be ‘ethical’, at least in the wake of Wall St scandals a few years back, but now the BS has been upped a notch or two and people are desperate to demonstrate their credentials as genuine creative forces bent on shaping the future, seizing the opportunity, taking the uncharted course, starting the next Facebook, mining the big data,  having double visions (ok, I made that last bit up).

How pin-prickingly deflationary it must be, then, for such folks to read the latest National Association of Colleges and Employers survey of what employers actually want in new hires. No, the old staple of ‘communication skills’ is not #1 (though it remains highly sought). Nor is it even ‘ethics’, though a ‘good work ethic’ is required. The #1 is Leadership, tied with Ability to Work in a Team (oh dear!). Being an entrepreneur? Ranked 17th, with only 25% of employers thinking it important. Being creative? That’s 19th, rated as less important than being tactful. Being a speculatist? Sorry, no need to apply.

iSchool faculty in Top 5 UT Inventions of 2014


What is it? Ciaran Trace, assistant professor in the School of Information, and Luis Francisco-Revilla, research associate at the Texas Advanced Computing Center, created software for a large touch-screen, table-top computer called an Augmented Processing Table (APT). The APT helps archivists and curators to better access, share and process both physical and born-digital materials.

Tell me more: The invention garnered the first Archival Innovator Award from the Society of American Archivists in 2013, with the team’s work described as “groundbreaking, overcoming professional and philosophical boundaries, embracing innovative ideas and emerging technology, and rethinking current standards and commonly-used models for arrangement and description in modern archives.” Ultimately, the APT Research Team’s work will not only help people in the field of archival science follow best practices for processing but will also increase and enhance access to “reliable, accurate and trustworthy collections of information.”