## The scenario
A mid-sized research university is developing its institutional response to generative AI. There is no single “AI project” - instead, there are dozens of overlapping conversations happening at different levels: individual instructors experimenting with ChatGPT in their courses, departments writing local policies, a provost-level working group drafting institutional guidelines, and national disciplinary organizations publishing position statements. Some faculty are enthusiastic, some are anxious, and most are somewhere in between.
This is a classic multi-level change initiative, and it’s exactly the kind of situation where the DICE Framework is useful - not because it simplifies the complexity, but because it helps people locate themselves within it.
## Mapping the scenario
The table below maps the four DICE modes (Decide, Influence, Contribute, Engage) across the four organizational levels (Micro, Meso, Macro, Mega) for this scenario. Each cell describes how a person might participate at that intersection. No single person occupies all sixteen cells - the point is to identify where you are and whether your energy is well-placed.
| | Decide | Influence | Contribute | Engage |
|---|---|---|---|---|
| Mega | Vote on national disciplinary standards for AI use (e.g., accreditation body) | Shape sector conversation through professional networks, conferences, publications | Present institutional case studies at conferences; contribute to cross-institutional research projects | Attend sector-wide events on AI in higher ed; follow emerging policy and literature |
| Macro | Approve institutional AI guidelines in a governance body (e.g., academic council) | Serve on the provost’s AI working group; advocate for specific policy directions | Pilot AI tools in courses and share results with the working group; write resource guides for colleagues | Attend town halls on AI policy; read institutional communications; complete AI literacy training |
| Meso | Set department-level expectations for AI disclosure in course outlines | Facilitate faculty discussions on AI and assessment; build informal coalitions across programs | Run a reading group on AI and pedagogy; document and share what’s working in your program | Participate in a faculty learning community on AI; attend a colleague’s workshop |
| Micro | Decide whether and how to permit AI use in your own courses; set expectations in your syllabus | Have one-on-one conversations with colleagues about their AI approaches | Experiment with AI-assisted assignment design; document your process and share with your department | Try an AI tool for the first time; attend an introductory workshop; read a blog post about AI and assessment |
## Reading the matrix
A few things to notice:
**Most people are in the bottom-right quadrant.** If you’re an instructor who attended a workshop on AI and is now trying ChatGPT in your course design process, you’re engaging at the mega and macro levels and contributing at the micro level. That’s entirely appropriate - and it’s meaningful participation in this change initiative, even though you’re nowhere near the governance table.

**The diagonal is a common pattern.** Many people decide at micro, influence at meso, contribute at macro, and engage at mega. This isn’t a rule - it’s just a frequent shape that emerges because formal authority tends to be strongest close to your own practice and weakest at the broadest levels.

**Meso is where initiatives stall or scale.** The department or faculty level is where individual experiments either get picked up by colleagues or stay isolated. An instructor who has redesigned their assessments for an AI context (contributing at micro) can only scale that work if there’s a meso-level mechanism - a learning community, a departmental conversation, a curriculum committee - that picks it up. Without meso, macro-level policy and micro-level innovation never connect.

**Frustration points are predictable.** A faculty member on the provost’s AI working group who tries to decide institutional policy will be frustrated - they can influence at that level, but the decision authority belongs to the governance body. An associate dean who spends all their time engaging (reading about AI, attending webinars) when their role gives them the authority to decide department-level expectations is underusing their position. The framework makes these mismatches visible.
## Using this for your own context
You can build a DICE matrix for any change initiative. Pick your scenario - a new assessment policy, a platform migration, a curriculum redesign, an accessibility initiative - and ask:
- Who can Decide at each level? These are the people with formal authority. There are usually fewer of them than you think.
- Who can Influence? These are the committee members, the facilitators, the coalition-builders. They shape decisions without making them.
- Who can Contribute? These are the doers - the people experimenting, documenting, creating, modelling. Their work generates the evidence that informs decisions.
- Who can Engage? These are the people learning, attending, reading, building understanding. They’re the next wave of contributors, and their receptive participation is what makes a community of practice function.
- Where are the gaps? If nobody is contributing at the meso level, your initiative has a scaling problem. If nobody is engaging at the macro level, your institutional policy may lack broad awareness. If someone is trying to decide at a level where they can only influence, they’re headed for burnout.
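If it helps to make the exercise concrete, the matrix and the gap check above can be sketched as a small data structure. This is a minimal illustration in Python, not part of the DICE Framework itself; the example entries are hypothetical.

```python
# A DICE matrix as a mapping from (level, mode) to the people or
# activities occupying that cell. Entries here are hypothetical.
LEVELS = ["micro", "meso", "macro", "mega"]
MODES = ["decide", "influence", "contribute", "engage"]

def find_gaps(matrix):
    """Return the (level, mode) cells that nobody occupies."""
    return [
        (level, mode)
        for level in LEVELS
        for mode in MODES
        if not matrix.get((level, mode))
    ]

# Hypothetical snapshot of an AI-policy initiative.
matrix = {
    ("micro", "decide"): ["instructors setting syllabus expectations"],
    ("micro", "contribute"): ["AI-assisted assignment pilots"],
    ("macro", "influence"): ["provost's working group members"],
    ("mega", "engage"): ["faculty following sector policy"],
}

gaps = find_gaps(matrix)
# An empty meso row signals the scaling problem described above:
# micro-level experiments with no mechanism to spread.
meso_gaps = [cell for cell in gaps if cell[0] == "meso"]
print(meso_gaps)
```

In this snapshot every meso cell comes back empty, which is exactly the pattern to watch for: the initiative has decision-makers and experimenters, but nothing connecting them.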
The goal isn’t to fill all sixteen cells, but rather to see the pattern clearly enough to spend your energy where it matters.
