Categorizing institutional AI use

I feel like we’re in a weird place with AI. I imagine most institutions are. We’ve had AI in various products for quite some time and haven’t really thought much about it. If we’re going to think about AI across the institution, it’s important that we include the older stuff, the stuff that might not be an LLM but is still AI. We don’t necessarily have to treat it all the same way, but we will need to say why we’re treating it differently.

Levels of choice

I am currently thinking about this along four basic levels. I think the levels of choice are useful for talking through the impact of AI integrations and how to communicate with people about them.

It’s also important to realize that the vendor (or the organization) may make decisions that move products between these levels at any time. I suspect moves towards more choice will be rare.

There’s also something to think about regarding whether the person is

  • using the service
  • having their content/actions observed/evaluated by the service
  • and/or having their content/actions/data used by the service for some purpose

Likely, it’ll be some combination of the three.

Required

There are some things in our system where AI is going to happen. People have no choice.

In our case, we have things like Site Improve’s analysis of our main website. Anything on there is analyzed with AI. Any scanning done on our printer/scanners is going to have OCR applied. (There may be a way out of that. I don’t actually know . . . and that’s part of the point.)

This type of integration needs to meet the highest bar of review and agreement. We’re requiring its use, and the more widely used the system is, the more important it is that we reach understanding and agreement that this is the right thing to do.

On by default (but opt out-able)

These are things that are turned on by default. They happen unless you know that you can opt out and how to do it.

One example for us is automatic transcription for videos in Panopto. You can turn that off at the folder level. The auto-completion/smart compose feature of Google Docs is another example. I didn’t know that you could turn that off until I started writing this. I didn’t really think about it. Things like chatbots will likely fall into this category if you can get to the same content through other means, but as systems optimize towards AI-first interactions, it’s likely non-AI interaction options will degrade significantly over time. Why work on website navigation improvements if 90% of your audience uses the chat interaction?

Opt in

These are AI integrations that require you to do something to activate. I’d consider anything that’s “off” by default to also fall into this group.

An example would be turning on transcriptions in Zoom. Another example is opting to use answer grouping in Gradescope or using UDOIT to analyze a Canvas site for accessibility.

Turned off

These are AI integrations we are actively preventing.

Like the things we require, we need to make sure we know why we’re doing this and that we communicate it effectively. Turning things off tends to drive people to outside systems where we often have no influence and where a myriad of other problems occur. It’s a tricky line to walk.
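
As a way to keep track of all this, here’s a rough sketch of what an inventory entry could look like if you recorded both the level of choice and the three kinds of involvement from earlier. This is just me playing with Python; the names and example entries are made up for illustration, not any official schema or vendor API.

    from dataclasses import dataclass
    from enum import Enum, auto

    class ChoiceLevel(Enum):
        """How much choice a person has about an AI integration."""
        REQUIRED = auto()       # it happens, no choice
        ON_BY_DEFAULT = auto()  # happens unless you opt out
        OPT_IN = auto()         # off until you turn it on
        TURNED_OFF = auto()     # actively prevented

    @dataclass
    class AIIntegration:
        """One AI feature in one product, as an inventory might record it."""
        product: str
        feature: str
        level: ChoiceLevel
        uses_service: bool = False         # the person is using the service
        observed_by_service: bool = False  # content/actions observed or evaluated
        data_reused: bool = False          # content/actions/data used for some purpose

    # Hypothetical entries, just to show the shape:
    inventory = [
        AIIntegration("Site Improve", "website content analysis",
                      ChoiceLevel.REQUIRED, observed_by_service=True, data_reused=True),
        AIIntegration("Panopto", "automatic video transcription",
                      ChoiceLevel.ON_BY_DEFAULT, uses_service=True, observed_by_service=True),
        AIIntegration("Zoom", "meeting transcription",
                      ChoiceLevel.OPT_IN, uses_service=True, observed_by_service=True),
    ]

    # Which required integrations also reuse people's data?
    flagged = [i for i in inventory if i.level is ChoiceLevel.REQUIRED and i.data_reused]

Even something that crude makes it easier to ask pointed questions, like which required integrations are also reusing people’s data.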

Tough conversations

I am not trying to judge these things (neither AI, nor the choices around its use) as good or bad. I’m trying to get a structure that lets us describe what is already happening. Then we can analyze how it lines up with institutional beliefs and opinions. It’s important we do that with what’s already in place. That will give us a foundation for the onslaught of new AI complications we’re going to be facing in the near future/present.

It’s not going to be easy in many cases.

You end up with questions like . . . Is machine-generated captioning better than no captions at all? How much would it cost to provide higher-quality captions? If we correct transcriptions, does the corporation’s AI benefit from our labor? If so, is that ok? etc. etc.

2 thoughts on “Categorizing institutional AI use”

  1. As for Panopto ASR captions, we have elected to leave them off site-wide and let the “instructor” determine whether or not they are useful. Some do. So we turn them on at the folder level. But when it comes to an actual accommodation request, that goes out for captioning via the Panopto caption providers integration. And we caption the course for the semester. As for the caption provider, dunno. They don’t explicitly talk about what they do A.I.-wise. I’ll admit we sign the contract and go about submitting requests for captions; it’s a big black box after that.

    1. For us, it’s just about consistency and communication. If we treat all AI choices the same, it’s harder to make a major mistake. Currently, we say one thing and do another . . . sometimes.

      I don’t know if some auto-captions are better than none. I think probably so. Even sans-AI, it needs communication.
