Julian Bleecker, BSEE, MSEng, Ph.D. | Decision support for uncertain, high-consequence bets
I help leadership teams make better bets under uncertainty.
I make possible futures tangible so organizations can see what they are actually committing to.
The work is useful before the roadmap exists, before a major investment hardens, or before an emerging technology becomes normal operating procedure. I use speculative prototyping and design fiction to create artifacts from plausible near futures that leaders can inspect, debate, and use.
The point is not inspiration. The point is decision-grade clarity: shared language, exposed implications, and clearer options before the organization commits.
Call Me When
There is a consequential decision to make, but the future it assumes is still too vague.
The consequences are unclear
A major product, policy, investment, or organizational decision is on the table, but the team cannot yet see what it commits them to.
The strategy feels underdefined
There is a direction, theme, or mandate, but it is still too abstract to evaluate, fund, brief, or turn into a roadmap.
Leadership is not aligned
People are using the same words but imagining different futures, different risks, and different versions of success.
Emerging technology has changed the terrain
AI or another unfamiliar technology is creating pressure to act before the organization has a shared picture of implications.
The roadmap is arriving too early
The organization is being asked to optimize, execute, or scale before the underlying bet has been made clear.
What Changes
After the work, the team has something concrete to decide with.
Outcome
Shared language
The team has a more precise way to discuss the decision, the future it assumes, and the disagreement that needs attention.
Outcome
Exposed implications
Risks, consequences, second-order effects, and operational assumptions become visible early enough to matter.
Outcome
Clearer options
Leadership can compare possible commitments with more confidence instead of mistaking a vague direction for a decision.
Engagement Models
Time-bounded ways to turn ambiguity into decision-grade clarity.
These are not generic service packages. Each model starts with a decision, mandate, or high-stakes question and works backward toward the artifacts, sessions, and synthesis needed to make that decision clearer.
Decision Clarity Sprint
When to use
Use this when a team needs to make or shape a near-term decision, but the consequences and options are still fuzzy.
What you get
A sharper decision frame, a small set of artifacts or scenarios, clearer options, exposed risks, and language the team can use immediately.
The team stops debating abstractions and starts evaluating what the decision would actually imply.
Strategic Artifact Engagement
When to use
Use this when a strategy, technology shift, market possibility, or policy question needs to be made concrete for executive debate.
What you get
Decision tools that can be inspected, challenged, circulated, and used to expose implications before commitments harden.
Artifacts make risk, alignment, and investment questions easier to see than a deck or report alone.
Executive Alignment Session
When to use
Use this when leaders need a shared picture of what they are deciding, what future they are assuming, and where disagreement actually sits.
What you get
A clearer shared frame, named tensions, stronger questions, and a more useful basis for next-step decisions.
Alignment improves when leaders can point at something tangible instead of negotiating around vague futures language.
Fractional / Advisory Partner
When to use
Use this when an organization needs senior judgment over time before the roadmap, lab, venture, or capability is fully defined.
What you get
Senior decision support, artifact-led strategy, sharper briefs, stakeholder alignment, and help translating uncertainty into options.
Some mandates need experienced pattern recognition before they need permanent headcount or a large program.
Not This
Not a trend deck. Not an inspiration workshop.
I am not most useful when the brief is to generate excitement and leave the hard choice untouched. I am useful when a leadership team needs to understand consequences, compare options, and see what a commitment would really mean.
Buyer Fit
Best fit: people with a real mandate, real stakes, and enough authority to act on what becomes clear.
Relevant contexts include executive strategy, product and venture bets, AI governance, policy questions, strategic foresight, R&D, and senior advisory work where implications matter before implementation begins.
Proof
A practice built to connect concrete artifacts, executive judgment, and real operating responsibility.
Founder / operator proof
Built and sold OMATA
Evidence that the work is not only conceptual: I have carried an uncertain product bet through product, brand, engineering, manufacturing, operations, and sale.
Method proof
Design fiction as decision support
A long-running artifact-led practice for making plausible futures concrete enough to inspect, debate, and act on.
Emerging-technology proof
AI, policy, governance
Current work focuses on making institutional, operational, and public consequences of AI tangible before they normalize.
Executive learning proof
Shared language under uncertainty
Teaching, seminars, and leadership sessions designed to improve judgment, alignment, and consequence-awareness rather than provide inspiration alone.
If you are evaluating whether this work can survive real constraints, start with OMATA. It began as an ambiguous product bet and carried through product, brand, engineering, manufacturing, operations, customer care, and ultimately the sale of the company.
Role Signals
The useful profile is not futurist as performer. It is senior judgment before the commitment hardens.
This can show up as an advisory engagement, a fractional leadership role, a focused decision sprint, or a larger artifact-led strategy program. The common thread is helping someone decide something important.
Selected Work
Projects that show how possible futures become tools for decision, alignment, and risk exposure.
OMATA
Founded, built, and later sold a premium hardware-and-software company around a new relationship to data, craft, and sport.
Shows founder-level judgment under uncertainty: turning a point of view into a funded, shipped, supported, and ultimately sold product business.
Near Future Laboratory / Design Fiction
Originated and developed design fiction as an artifact-based way to make future conditions tangible enough to discuss, challenge, and act on.
Makes possible futures inspectable, so leaders can see what a strategy, technology, or investment might actually commit them to.
TBD Catalog
Used the familiar language of a catalog to compress debate, surface assumptions, and explore adjacent possibilities through artifacts.
Turned unclear futures into concrete objects that exposed assumptions, risks, and options for decision makers.
Applied Intelligence
A newspaper from an AI future that reframed strategic and policy conversations through a tangible, discussable artifact.
Helps leaders move from AI anxiety or hype to specific questions about consequences, governance, trust, and organizational action.
SuperSeminar
A learning and development platform designed to cultivate shared language, judgment, and interdisciplinary thinking over time.
Shows how capability building can improve strategic judgment rather than simply create a memorable session.
General Seminar
A seminar format for structured inquiry, conversation, and collaborative learning around emerging ideas and cultural change.
Demonstrates facilitation as a way to develop judgment, alignment, and decision readiness.
Enterprise Workshops and Sprints
Worked with teams at large organizations to prototype futures, align around consequential questions, and make unfamiliar options legible.
Makes clear that sessions and sprints are mechanisms for decision clarity, not the product in themselves.
Current Focus
Current work is centered on institutions trying to reason clearly before the future arrives as default behavior.
AI Policy & Governance
Current work in AI policy and governance focuses on what happens when AI systems move into institutions, workflows, delegated authority, and public life.
The work makes trust problems, governance gaps, and real-world consequences concrete enough for leaders and publics to inspect before they become normal operating conditions.
Going Over Backwards
An in-progress book about building organizational imagination as a practical capability for perceiving unfamiliar possibilities before they fit the existing language of the institution.
It frames speculative prototyping as a disciplined way for teams to host uncertainty, test implications, and make better commitments.
Writing
Writing is part of the evidence: it shows the method, the judgment, and the long-term practice behind the engagements.
Organizational Imagination / Speculative Prototyping
An in-progress book making the case for speculative prototyping as a practical way for teams to reason about commitments before they harden.
Shows the current direction of the work: organizational imagination as decision capability, not imagination as an end in itself.
The Manual of Design Fiction
A practical and canonical reference for design fiction and the use of artifacts to render futures tangible.
This book documents a practice I developed and evolved from a short essay into a method for making possible futures concrete enough to inspect.
It’s Time To Imagine Harder
A concise statement of the strategic posture behind the broader body of work.
Frames imagination as a disciplined way to see options, consequences, and commitments before the familiar process takes over.
Near Future Laboratory
The broader Near Future Laboratory archive includes essays, primers, and project work spanning more than 20 years at the intersection of design, technology, and culture.
The archive shows the depth behind the decision-facing work here: methods, projects, artifacts, reflections, and long-term practice.