March 31, 2026

How I turned the 2x2 matrix into a skill for Claude

Key points

  • The axes are the argument, while the quadrants are the consequences. A 2x2 matrix is only as good as the axis pair you choose. Most people, and most LLM outputs, default to the first obvious pair and produce a matrix that confirms what’s already known.
  • I built a skill that forces Claude to reject the obvious. Two cognitive functions, one that builds the frame and one that attacks it, work in tension before any matrix gets produced. The skill generates at least four candidate axis pairs, explicitly rejects the weak ones, and produces two materially different matrices.
  • Same exercise, but different starting points. I completed a 2x2 exercise, anchoring my answer in EU regulatory constraints (explainability as a survival question). Claude anchored in decision architecture and organizational feasibility. Neither was wrong, but they covered different dimensions of the same problem.
  • The skill extends thinking. The most useful output was seeing where my analysis had blind spots that Claude’s didn’t, and vice versa. That’s the point of reasoning with LLMs rather than delegating thinking.

I took the 2x2 matrix, a thinking and decision framework I’d been learning about, and turned it into a reasoning skill for Claude. Then I gave Claude the exact same exercise I had completed myself and compared the outputs. This is what happened.

A 2x2 matrix is the simplest possible tool for making a decision when you’re stuck between multiple options and can’t see them clearly. It’s also the best tool for mapping terrain that feels muddy and undefined.

A classic example is the Eisenhower Matrix. You have two axes: urgency and importance. The axes create four quadrants:

  1. Urgent and important (do it now)
  2. Important but not urgent (schedule it)
  3. Urgent but not important (delegate it)
  4. Neither urgent nor important (drop it)

You take two criteria, the two things that matter most, and map them against each other. That creates four quadrants. Every option you’re considering goes into one of the four boxes.
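The mechanics are simple enough to express in a few lines. Here’s a minimal Python sketch of the quadrant logic; the task names and their scores are invented for illustration:

```python
def quadrant(urgent: bool, important: bool) -> str:
    """Map a task's position on the two axes to its quadrant's action."""
    if urgent and important:
        return "do it now"
    if important:
        return "schedule it"
    if urgent:
        return "delegate it"
    return "drop it"

# Hypothetical tasks scored as (urgent, important)
tasks = {
    "production outage": (True, True),
    "quarterly planning": (False, True),
    "status meeting": (True, False),
    "inbox zero": (False, False),
}

for name, (urgent, important) in tasks.items():
    print(f"{name}: {quadrant(urgent, important)}")
```

Notice that the function encodes nothing but the axis pair. Swap in a different pair and every task can land in a different box, which is exactly why the choice of axes is the real work.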

The matrix itself is trivial. However, the real work happens when choosing the axes. They are the argument, while the quadrants are just the consequences.

Why I built a skill for it

I’ve seen people turn frameworks into skills for Claude, but most of them lack rigor. They encode the surface of the framework: “Generate X framework” without encoding what actually makes the framework work or fail.

Take the 2x2 matrix. If you just tell Claude “create a 2x2 matrix for this situation by proposing two meaningful axes, explaining the quadrants, and explaining your choice,” you’ll get something that looks right. Two axes, four quadrants, options placed neatly. But the axes will probably be generic. The placement will be tidy, and the matrix itself won’t reveal anything you didn’t know or suspect.

The reason this happens is that Claude will default to the obvious axis pairs (“cost vs. speed,” “risk vs. reward”) because those sound reasonable. But reasonable isn’t the same as revealing. A good matrix organizes information and makes a claim about what actually matters. And getting Claude to do that requires encoding not just what a 2x2 matrix is, but what makes a strong one strong and a weak one weak.

How I designed the skill

The first thing I realized is that the skill required cognitive functions, or instructions for how Claude should think about the matrix, not just what it should produce. So I created two of them:

  • The axis creator focuses on building the frame. It asks: Which two axes create the most meaningful spread? Are they independent? Does movement on one axis automatically imply movement on the other? What hidden variables are being compressed into each axis? What worldview does this pair encode?
  • The differentiation challenger focuses on attacking the frame. It asks: Is this just the first obvious matrix? Do these axes actually cut the options apart, or do they collapse everything into one quadrant? What less obvious pair would reveal a different truth? What would an executive miss if she used only this frame?

The axis creator builds a coherent frame, while the differentiation challenger attacks it for obviousness, weak differentiation, and hidden assumptions. They’re designed to work in tension with each other. One constructs, the other pressure-tests. Each function must ask clarifying questions before proceeding with building the axes.

I also encoded something specific about axis quality. Strong axes are independent, specific, decision-relevant, differentiating, and legible. Weak axes are vague, redundant, obvious but unhelpful, non-differentiating, or packed with too many dimensions at once. Claude has to explicitly test candidate axes against these criteria before selecting the final pair.

I also specified two modes: terrain mode (when you’re trying to understand a landscape, a market, a problem space) and decision mode (when you’re choosing, prioritizing, or comparing options). The axes that work for mapping a terrain are different from the axes that work for making a decision, and the skill needs to know which one it’s doing.

Finally, I built in a rule: the skill must produce two materially different matrices, not one. Two genuinely different ways of cutting the same reality. This forces Claude to go beyond the first obvious frame and show what a second lens reveals that the first one hides.
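Put together, the skill file might look something like this. This is an illustrative sketch, not the published version: the frontmatter fields follow Anthropic’s Agent Skills convention (a SKILL.md with `name` and `description`), and the section wording here is condensed from the rules described above.

```markdown
---
name: 2x2-matrix
description: Build and stress-test 2x2 decision matrices with non-obvious axes
---

# 2x2 Matrix

## Mode selection
Decide first: terrain mode (mapping a landscape or problem space) or
decision mode (choosing, prioritizing, or comparing options). State
which mode applies and why before proposing any axes.

## Cognitive functions
- **Axis creator**: propose axis pairs that create meaningful spread.
  Check independence, hidden variables, and the worldview each pair encodes.
- **Differentiation challenger**: attack each pair. Is it the first
  obvious frame? Do the options collapse into one quadrant? What would
  a less obvious pair reveal?

## Rules
1. Ask clarifying questions before building axes.
2. Generate at least four candidate axis pairs.
3. Test each pair against the axis-quality criteria; reject weak
   pairs explicitly, stating why.
4. Produce two materially different matrices, never one.
```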

You can find the skill in my GitHub repository.

The experiment

When I first learned the 2x2 matrix, I had an exercise to complete.

The exercise:

You’re advising the CEO of a mid-sized European cybersecurity firm. 150 people, based in Amsterdam. They sell endpoint protection to mid-market companies. The CEO wants to expand into AI-powered threat detection and has four paths: build in-house (A), acquire a startup (B), partner with a cloud provider (C), or white-label from a vendor (D). She says: “I know I need to move into AI security but I don’t know which path to take. They all have tradeoffs.”

I completed this exercise myself. Then I gave Claude the exact same exercise, with one constraint: it would get no answers to clarifying questions, because I wanted it to work from the same information I had when I did it.

My answer:

I chose Explainability Level (high/low) and Speed to Market (fast/slow) as my axes. My reasoning was rooted in the EU context. Under GDPR, individuals have rights related to automated decision-making. The AI Act is increasing scrutiny on AI systems. For a European cybersecurity firm, explainability isn’t optional; it’s a regulatory constraint that shapes everything.

Under the EU AI Act, AI systems used in critical infrastructure protection, including key cybersecurity controls for energy, transport, healthcare, and other vital sectors, are explicitly treated as high-risk when they support or automate safety-critical decisions. For those systems, the Act requires:

  • Transparent and interpretable outputs so operators can understand, validate, and oversee what the system is doing.
  • Traceability and comprehensible information about capabilities, limitations, and decision logic, which is effectively a forced-explainability requirement.

Explainability depends on whether you control the model: access to the underlying architecture, the training data, and the feature engineering, plus the ability to modify and audit it. If you don’t control the model, your explainability is limited.

So my matrix separated two options clearly: Option A (build in house) gives you high explainability but slow speed. Option B (acquire a startup) gives you high explainability and faster speed, as you’re buying a working product with full ownership. Option C (partner with a cloud provider) gives you low explainability and fast speed. You use their models, which you don’t control. Option D (white-label) gives you low explainability and slower speed, as you don’t control the model, and you still need integration and compliance work.

My recommendation was Option B, under specific conditions: acquire an EU startup with a functional product, ideally with existing customers and AI/ML talent. This gives you the best combination of explainability and speed. You hit the ground running with a product you can audit, modify, and defend to regulators.


What Claude produced:

Claude’s output was different from mine in ways I found genuinely interesting.

Claude’s first matrix used Strategic Differentiation (low/high) and Reversibility (low/high). This reframed the decision as a commitment question: are you building a moat or renting a position? And if the bet doesn’t work, how easily can you change course?

Build in-house and acquire (A and B) clustered in the high-differentiation, low-reversibility quadrant: commitment bets. White-label (D) sat in the low-differentiation, high-reversibility quadrant: a hedge. Partner (C) fell in the middle.

The insight this matrix surfaced: the decision is fundamentally about conviction. Does the CEO believe AI threat detection will become the core differentiator in her market within 3-5 years, or will it commoditize? If it’s core, the commitment paths win. If it commoditizes, the hedge paths are smarter.

Claude’s second matrix used Organizational Absorption Cost (low/high) and Strategic Optionality Created (low/high). This reframed the decision as a feasibility question: what can a 150-person firm actually absorb, and what does each path buy you in future flexibility?

The interesting finding: there was no clean option in the ideal quadrant (high optionality, low absorption). Every path that creates real future flexibility also imposes significant organizational disruption. The matrix made explicit why the decision is stuck: the CEO intuitively feels that the best paths are exactly the ones that are hardest to execute at her firm’s size.

Claude recommended the second matrix as stronger for this CEO at this moment, reasoning that she already knows the strategic differentiation story, and that’s why she’s asking. What she probably hasn’t mapped is the organizational price of each path.


What I noticed

My answer and Claude’s answer approached the same problem from fundamentally different starting points.

I was anchored in the regulatory landscape. The EU context dominated my thinking: GDPR, the AI Act, explainability requirements. This is a real constraint and I believe you can’t bypass it when making decisions in European markets. My axes reflected that: explainability is a regulatory survival question, and speed determines whether you’re ahead or behind.

Claude didn’t start from regulation, but from the architecture of the decision itself: what kind of bet is being made, and what it costs the organization. The EU context didn’t appear in Claude’s analysis, and that’s a gap. But what Claude produced instead was something I hadn’t considered: the organizational absorption dimension. At 150 people, an acquisition or a major build program isn’t just a strategy, but a dangerous disruption that changes the daily experience of the firm. I hadn’t thought about it that way.

Would I use Claude’s matrices over my own? Not instead of, but alongside. My regulatory lens is essential for a European firm. Claude’s organizational absorption lens is essential for a 150-person firm. The strongest advice would combine both. And that’s probably the most useful thing the experiment showed me: co-thinking and reasoning with LLMs can produce results that neither party would reach alone.