Intersectional Decision-Support Tool
Concept: a research-backed tool offering situated guidance, drawing on a social-anthropological framing of how identity, environment and incentive structures shape decisions. Scoping in progress.
The problem (working draft)
Generic AI assistants give generic advice. The advice that lands — that someone actually acts on — is situated: shaped by the specific bundle of identities, environments, constraints, and incentive structures the person sits inside. A well-meant suggestion that ignores those constraints is at best useless and at worst harmful.
This concept page documents the thinking, which is well developed, rather than the implementation, which is still being scoped. I'll expand it as the shape firms up.
Theoretical framing
The starting point is intersectionality, not as identity-politics shorthand, but as the original methodological observation: people sit at the intersection of multiple categories of difference, and the intersection produces decision contexts that none of the constituent categories explains on its own.
Three theoretical reference points I'm working with:
- Crenshaw on intersectionality (1989, 1991). The original argument: single-axis analysis misses the people who sit at the intersections. A decision-support tool that asks "what's your context?" along a single axis will give bad advice to anyone whose context is multi-axis — which is everyone.
- Bourdieu on habitus and field. Decisions are shaped by the dispositions a person has acquired (habitus) operating inside a structured environment (field). Useful advice has to model both the person and the field; see the sketch after this list.
- Scott on legibility. Generic systems make things legible by stripping context. The interesting question for a decision-support tool is the inverse: how do you keep the context while surfacing actionable patterns?
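To make the Bourdieu point concrete, here is a minimal sketch of what "model both the person and the field" could mean as a data structure. It is purely illustrative: every type and field name below is a hypothetical assumption, not a design decision.

```ts
// Hypothetical sketch only: every name and field here is an assumption,
// not a design decision. The structural point is the one that matters:
// context is multi-axis (Crenshaw), and it has to capture both the
// dispositions a person carries (habitus) and the structured environment
// the decision happens inside (field).

// One axis of difference, self-reported or inferred.
interface ContextAxis {
  axis: string;            // e.g. "profession", "care responsibilities"
  value: string;
  source: "self-report" | "observed";
}

// The person side: acquired dispositions and hard constraints.
interface Habitus {
  axes: ContextAxis[];     // never a single axis; the intersection matters
  constraints: string[];   // time, money, obligations the advice must respect
}

// The field side: the environment whose rules and rewards shape the decision.
interface Field {
  setting: string;         // e.g. "seed-stage startup", "local council"
  incentives: string[];    // what the environment rewards and punishes
  norms: string[];         // unwritten rules that shape what feels possible
}

// Advice can only be assessed against both halves together.
interface DecisionContext {
  person: Habitus;
  field: Field;
}
```

The only claim the sketch commits to is structural: context is never single-axis, and guidance is only assessable against the person and the field together.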
The tool's central design question: how do you offer guidance that is specific enough to be actionable, without flattening the complexity of the person you're advising?
What I'm scoping
I'm not ready to share the implementation shape publicly — it's an active scoping project. The questions I'm working through:
- Surface. Chat? Guided flow? Embedded in another tool? The choice constrains everything else.
- Input model. How does the tool know what context the user is in? Self-report? Observation? Both?
- Output shape. A single recommendation? A range of options framed by their fit (sketched after this list)? A question that helps the user think rather than an answer that thinks for them?
- Evaluation. How do you know the guidance was useful? "Did the user act on it" is too thin. "Did it help them make a better-shaped decision" is the harder question.
- Failure modes. Generic AI assistants fail by being blandly applicable. Situated tools can fail by being confidently wrong about a context they've misread. Worse failure mode.
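One way to see how these questions interact is to sketch the "range of options framed by their fit" output shape, with the misread risk from the failure-modes point surfaced alongside it. Again, every name here is an assumption for illustration; nothing is decided.

```ts
// Hypothetical sketch of the "range of options framed by fit" output shape.
// All names are illustrative assumptions; nothing here is decided.

interface Option {
  action: string;           // a candidate course of action
  fit: number;              // 0..1: how well it fits the stated context
  fitsBecause: string[];    // the parts of the context that support it
  strainsAgainst: string[]; // the constraints it rubs up against
}

// Return ranked options plus an open question, so the tool helps the
// user think rather than thinking for them; surface where the context
// may have been misread, since that is the dangerous failure mode.
interface Guidance {
  options: Option[];        // ordered by fit, never a lone answer
  openQuestion?: string;    // e.g. "which constraint is actually fixed?"
  misreadRisk: string[];    // assumptions the user should be able to correct
}
```

Making fitsBecause and strainsAgainst explicit is one answer to the central design question above: the guidance stays specific, but it carries its own record of which parts of the person's context it leans on and which it strains against.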
Why I'm interested in this
Two reasons.
One: my MA was in social anthropology precisely because I find the question of how people make decisions inside specific cultural and institutional contexts genuinely fascinating. The dominant paradigm in AI tooling right now is de-contextualisation as a feature (one model that works for everyone). I think the more interesting frontier is the opposite: tooling that takes context seriously and adapts to it.
Two: every user research project I run hits the same pattern. The aggregate finding is interesting; the cluster-level finding is more interesting; the individual story that doesn't fit the cluster is most interesting of all. A decision-support tool that respected individual fit the way a good ethnographer does would be a different kind of product.
Status
Concept-stage. Theoretical framing developed; prototype shape being decided. I'll update this page as scoping progresses.
If you've worked on something adjacent — situated AI guidance, contextual recommendation, anything where the system has to model both person and field — I'd love to compare notes. Get in touch.